modelId: string (length 4–81)
tags: list
pipeline_tag: string (17 classes)
config: dict
downloads: int64 (0–59.7M)
first_commit: timestamp[ns, tz=UTC]
card: string (length 51–438k)
AnonymousSub/SR_specter
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-11-23T22:03:23Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: xaeroq/MLAgents-Pyramids 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
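As a complement to the usage steps above, here is a minimal, hedged sketch (not part of the original card) of pulling the trained agent's files from the Hub with `huggingface_hub` before resuming training or loading the `.onnx` file locally; the repo id comes from the card's "Watch your Agent play" step.

```python
# Sketch only: download the Pyramids agent repository locally.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="xaeroq/MLAgents-Pyramids",   # model_id listed in the card above
    local_dir="./MLAgents-Pyramids",      # where configs and the .onnx/.nn files land
)
print(f"Agent files downloaded to {local_dir}")
```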
AnonymousSub/T5_pubmedqa_question_generation
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
--- license: creativeml-openrail-m --- A StableDiffusion model with Dreambooth training based on hard lighting and silhouetting in an anime/illustration art style. Trained for 9000 steps using 100 total training images. Base model - Anything-V3.0 ## Usage Can be used in StableDiffusion, including the extremely popular Web UI by Automatic1111, like any other model, by placing the .CKPT file in the correct directory. Please consult the documentation for your installation of StableDiffusion for more specific instructions. Use the following tokens in your prompt to achieve the desired output. Token: ```"s_hls"``` Class: ```"illustration style"``` ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
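For readers who prefer `diffusers` over a Web UI, the following is a hedged sketch of loading the checkpoint programmatically; the `.ckpt` filename and the use of `StableDiffusionPipeline.from_single_file` (available in recent diffusers releases) are assumptions, not instructions from the card above.

```python
# Sketch under assumptions: load the downloaded .CKPT with diffusers and
# prompt it with the "s_hls" token and "illustration style" class.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "s_hls.ckpt",              # assumed local filename of the checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("portrait of a samurai at dusk, s_hls, illustration style").images[0]
image.save("s_hls_sample.png")
```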
AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: en thumbnail: http://www.huggingtweets.com/josephflaherty/1669242112755/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1529933319919616011/mEzYnY5Z_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Joe Flaherty – Venture Capital Scribe</div> <div style="text-align: center; font-size: 14px;">@josephflaherty</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Joe Flaherty – Venture Capital Scribe. | Data | Joe Flaherty – Venture Capital Scribe | | --- | --- | | Tweets downloaded | 3247 | | Retweets | 150 | | Short tweets | 154 | | Tweets kept | 2943 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h0zhab8z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @josephflaherty's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hw29ydt) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hw29ydt/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/josephflaherty') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: mit --- ### hockey player on Stable Diffusion via Dreambooth #### model by martinma This is the Stable Diffusion model fine-tuned on the hockey player concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks hockey** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/hockey-player/resolve/main/concept_images/0.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/hockey-player/resolve/main/concept_images/1.jpeg) ![image 2](https://huggingface.co/sd-dreambooth-library/hockey-player/resolve/main/concept_images/2.jpeg) ![image 3](https://huggingface.co/sd-dreambooth-library/hockey-player/resolve/main/concept_images/3.jpeg)
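A minimal sketch of running this concept with `diffusers`, assuming the weights live in the `sd-dreambooth-library/hockey-player` repository that hosts the concept images above (the Colab inference notebook linked in the card covers the same workflow in more detail).

```python
# Sketch only: generate an image from the Dreambooth concept with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/hockey-player",  # repo hosting the concept images above
    torch_dtype=torch.float16,
).to("cuda")

# The card's instance prompt is "a photo of sks hockey"; extend it as needed.
image = pipe("a photo of sks hockey player skating on an ice rink").images[0]
image.save("sks_hockey.png")
```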
AnonymousSub/cline-emanuals-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 188 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 8, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1504, "warmup_steps": 151, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
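Beyond the plain `encode` call above, here is a short, hedged illustration of the semantic-search use case the card mentions, using `sentence_transformers.util`; `{MODEL_NAME}` is the card's own unfilled placeholder and must be replaced with the actual repo id.

```python
# Sketch only: rank corpus sentences against a query by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # placeholder from the card above

corpus = ["A man is eating food.", "A monkey is playing drums.", "The girl carries a baby."]
query = "Someone is having a meal."

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, corpus_emb)[0]   # shape: (len(corpus),)
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```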
AnonymousSub/cline-emanuals-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: mit --- ### Harvard beating Yale II on Stable Diffusion via Dreambooth #### model by derekzheng This is the Stable Diffusion model fine-tuned on the Harvard beating Yale II concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks Harvard beating Yale** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/harvard-beating-yale-ii/resolve/main/concept_images/1.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/harvard-beating-yale-ii/resolve/main/concept_images/0.jpeg) ![image 2](https://huggingface.co/sd-dreambooth-library/harvard-beating-yale-ii/resolve/main/concept_images/4.jpeg) ![image 3](https://huggingface.co/sd-dreambooth-library/harvard-beating-yale-ii/resolve/main/concept_images/3.jpeg) ![image 4](https://huggingface.co/sd-dreambooth-library/harvard-beating-yale-ii/resolve/main/concept_images/2.jpeg)
AnonymousSub/cline-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: mit tags: - generated_from_trainer - nlu - intent-classification metrics: - accuracy - f1 model-index: - name: mdeberta-v3-base_amazon-massive_intent results: - task: name: intent-classification type: intent-classification dataset: name: MASSIVE type: AmazonScience/massive split: test metrics: - name: F1 type: f1 value: 0.8136 datasets: - AmazonScience/massive language: - en pipeline_tag: text-classification --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-v3-base_amazon-massive_intent This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the [MASSIVE1.1](https://huggingface.co/datasets/AmazonScience/massive) dataset. It achieves the following results on the evaluation set: - Loss: 1.1661 - Accuracy: 0.8136 - F1: 0.8136 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 3.6412 | 1.0 | 720 | 2.7536 | 0.3123 | 0.3123 | | 2.8575 | 2.0 | 1440 | 1.8556 | 0.5303 | 0.5303 | | 1.7284 | 3.0 | 2160 | 1.3758 | 0.6699 | 0.6699 | | 1.3794 | 4.0 | 2880 | 1.1221 | 0.7236 | 0.7236 | | 0.942 | 5.0 | 3600 | 0.9936 | 0.7609 | 0.7609 | | 0.7672 | 6.0 | 4320 | 0.9411 | 0.7727 | 0.7727 | | 0.602 | 7.0 | 5040 | 0.9196 | 0.7841 | 0.7841 | | 0.4776 | 8.0 | 5760 | 0.9328 | 0.7895 | 0.7895 | | 0.4347 | 9.0 | 6480 | 0.9602 | 0.7860 | 0.7860 | | 0.2941 | 10.0 | 7200 | 0.9543 | 0.7949 | 0.7949 | | 0.2783 | 11.0 | 7920 | 0.9979 | 0.8013 | 0.8013 | | 0.2038 | 12.0 | 8640 | 0.9702 | 0.8062 | 0.8062 | | 0.1827 | 13.0 | 9360 | 1.0121 | 0.8106 | 0.8106 | | 0.1352 | 14.0 | 10080 | 1.0339 | 0.8136 | 0.8136 | | 0.1115 | 15.0 | 10800 | 1.1091 | 0.8057 | 0.8057 | | 0.0996 | 16.0 | 11520 | 1.1134 | 0.8151 | 0.8151 | | 0.0837 | 17.0 | 12240 | 1.1288 | 0.8160 | 0.8160 | | 0.0711 | 18.0 | 12960 | 1.1499 | 0.8155 | 0.8155 | | 0.0594 | 19.0 | 13680 | 1.1622 | 0.8126 | 0.8126 | | 0.0569 | 20.0 | 14400 | 1.1661 | 0.8136 | 0.8136 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
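A hedged usage sketch for the intent classifier described above: the card only names the run (`mdeberta-v3-base_amazon-massive_intent`), so the repo id below is a placeholder to be replaced with wherever the fine-tuned weights are actually hosted.

```python
# Sketch only: classify an utterance into a MASSIVE-style intent label.
from transformers import pipeline

model_id = "<namespace>/mdeberta-v3-base_amazon-massive_intent"  # placeholder repo id
classifier = pipeline("text-classification", model=model_id)

print(classifier("wake me up at nine am on friday"))
# expected shape of output: [{'label': '<intent>', 'score': ...}]
```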
AnonymousSub/cline-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
Access to model Checo1999/Proyecto_ML is restricted and you are not in the authorized list. Visit https://huggingface.co/Checo1999/Proyecto_ML to ask for access.
AnonymousSub/cline_emanuals
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "LecbertForPreTraining" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: vibrant_borg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # vibrant_borg This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 
'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'vibrant_borg', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/17ff9n93
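A hedged sketch of sampling from the model described above. The training config sets `hub_model_id` to `vibrant_borg` and the W&B run lives under `tomekkorbak`, so `tomekkorbak/vibrant_borg` is an assumed repo id; the sampling settings mirror the card's generation scenarios (temperature 0.7, top_p 0.9, max length 128).

```python
# Sketch under assumptions: load the from-scratch GPT-2-style model and sample.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tomekkorbak/vibrant_borg"  # assumed repo id, see note above
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The weather today", return_tensors="pt")
outputs = model.generate(
    **inputs, do_sample=True, max_length=128, temperature=0.7, top_p=0.9
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```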
AnonymousSub/cline_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-11-23T23:14:06Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: cocky_archimedes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # cocky_archimedes This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'filter_threshold': 0.00078, 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 
'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'cocky_archimedes', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/289sk0vj
AnonymousSub/cline_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: nifty_thompson results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # nifty_thompson This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.00056}, 'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}, {'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 
'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'nifty_thompson', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/26ju1hp2
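The full config above shows conditional training with `<|aligned|>` / `<|misaligned|>` control tokens and an `<|aligned|>` prefix at evaluation time. The following hedged sketch reproduces that setup at inference; `tomekkorbak/nifty_thompson` is an assumed repo id based on `hub_model_id` and the W&B namespace.

```python
# Sketch under assumptions: condition generation on the aligned control token
# and ban both control tokens from being generated (as in bad_words_ids above).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tomekkorbak/nifty_thompson"  # assumed repo id, see note above
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "<|aligned|>The weather today"
inputs = tokenizer(prompt, return_tensors="pt")
bad_words_ids = [
    [tokenizer.convert_tokens_to_ids("<|aligned|>")],
    [tokenizer.convert_tokens_to_ids("<|misaligned|>")],
]
outputs = model.generate(
    **inputs, do_sample=True, max_length=128, temperature=0.7, top_p=0.9,
    bad_words_ids=bad_words_ids,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```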
AnonymousSub/consert-emanuals-s10-SR
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: quirky_ritchie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # quirky_ritchie This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 
'name': 'Unlikelihood', 'score_threshold': 0.00078}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'quirky_ritchie', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/xb3x3sd4
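The full config above names an 'Unlikelihood' objective with alpha 1 and a score_threshold of 0.00078. As a rough illustration (not the training code used for this run), a token-level unlikelihood term can be mixed with standard MLE as sketched below; the per-token toxicity scores and the exact way the threshold gates the penalty are assumptions.

```python
import torch
import torch.nn.functional as F

def mle_with_unlikelihood(logits, input_ids, token_scores,
                          score_threshold=0.00078, alpha=1.0):
    # Shift so position t predicts token t+1, as in ordinary causal LM training.
    logits, targets = logits[:, :-1, :], input_ids[:, 1:]
    scores = token_scores[:, 1:]  # assumed per-token toxicity scores

    log_probs = F.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    mle_loss = -token_log_probs
    # Unlikelihood term pushes down the probability of flagged tokens.
    unlikelihood = -torch.log((1.0 - token_log_probs.exp()).clamp_min(1e-20))

    flagged = (scores > score_threshold).float()
    loss = (1.0 - flagged) * mle_loss + alpha * flagged * unlikelihood
    return loss.mean()
```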
AnonymousSub/consert-techqa
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: inspiring_easley results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # inspiring_easley This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'filter_threshold': 0.00078, 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 
'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'inspiring_easley', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/2mtfj210
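Unlike the run above, this config pairs a plain 'MLE' objective with a dataset-level 'filter_threshold' of 0.00078, i.e. training data scoring above the toxicity threshold is dropped rather than penalized. A minimal sketch of that filtering step follows; the split name and the column holding the score are assumptions, since the detoxify-pile chunks may store per-sentence scores under different fields.

```python
from datasets import load_dataset

FILTER_THRESHOLD = 0.00078

# One of the chunks listed in the card; 'train' split and 'score' column are assumed names.
chunk = load_dataset("tomekkorbak/detoxify-pile-chunk3-0-50000", split="train")
kept = chunk.filter(lambda example: example["score"] <= FILTER_THRESHOLD)
print(f"kept {len(kept)} of {len(chunk)} documents")
```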
AnonymousSub/declutr-biomed-roberta-papers
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: distracted_kare results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # distracted_kare This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.00056}, 'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}, {'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 
'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'distracted_kare', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/aphbg6na
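This run differs from the two above: the config adds two control tokens, '<|aligned|>' and '<|misaligned|>', prepended to training text depending on whether its score clears the 0.00056 threshold, and generation is conditioned on the aligned prefix while the control-token ids (50257, 50258) are blocked via bad_words_ids. A hedged sampling sketch matching the card's generation kwargs is below; the repo id 'tomekkorbak/distracted_kare' is inferred from the hub_model_id and the Wandb account, not stated explicitly.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "tomekkorbak/distracted_kare"  # assumed full hub id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Condition on the aligned control token, as in the card's generation scenarios.
inputs = tokenizer("<|aligned|>", return_tensors="pt")
samples = model.generate(
    **inputs,
    do_sample=True,
    max_length=128,
    min_length=10,
    temperature=0.7,
    top_k=0,
    top_p=0.9,
    bad_words_ids=[[50257], [50258]],  # keep both control tokens out of the output
)
print(tokenizer.decode(samples[0], skip_special_tokens=True))
```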
AnonymousSub/declutr-emanuals-s10-SR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
2022-11-23T23:21:29Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: stupefied_janusz results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # stupefied_janusz This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 
'name': 'Unlikelihood', 'score_threshold': 0.00078}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'stupefied_janusz', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/1wpulou4
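The hyperparameters here mirror the unlikelihood run above, so one quick sanity check is worth spelling out: the 50354 training_steps follow from the 3.3B-token budget once you assume GPT-2's default 1024-token context, which the config never states explicitly.

```python
# Back-of-envelope check (the 1024-token context length is an assumption).
num_tokens = 3_300_000_000
effective_batch_size = 64      # sequences per optimizer step, from the card
context_length = 1024          # assumed GPT-2 block size
print(num_tokens // (effective_batch_size * context_length))  # -> 50354
```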
AnonymousSub/declutr-emanuals-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-11-23T23:23:29Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: hungry_rosalind results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # hungry_rosalind This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 3147 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'value_head_config': 
{'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 0.5, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'hungry_rosalind', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/2csvdc1h
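This run switches to an 'AWR' objective with alpha 0.5 and beta 10 and attaches a non-detached value head to the model. Below is a rough, hedged sketch of advantage-weighted regression in a token-level LM setting: log-likelihood is reweighted by exponentiated advantages while the value head is regressed toward the reward. The per-token reward/value tensors and the way alpha trades off the two terms are assumptions, not a reading of the actual training code.

```python
import torch
import torch.nn.functional as F

def awr_loss(logits, input_ids, rewards, values, alpha=0.5, beta=10.0):
    # rewards/values: assumed per-token tensors from a scorer and the value head.
    logits, targets = logits[:, :-1, :], input_ids[:, 1:]
    log_probs = F.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    advantages = (rewards[:, 1:] - values[:, 1:]).detach()
    weights = torch.exp(advantages / beta)                # exponentiated advantages
    policy_loss = -(weights * token_log_probs).mean()

    value_loss = F.mse_loss(values[:, 1:], rewards[:, 1:])  # fit the value head
    return alpha * policy_loss + (1.0 - alpha) * value_loss
```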
AnonymousSub/declutr-model-emanuals
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: serene_yonath results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # serene_yonath This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 3147 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'value_head_config': 
{'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'serene_yonath', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/vmjbnu1o
AnonymousSub/declutr-model_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: mit --- ### manga nov 23 on Stable Diffusion This is the `<manga-characters-nov23>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<manga-characters-nov23> 0](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/376.png) ![<manga-characters-nov23> 1](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/459.png) ![<manga-characters-nov23> 2](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/59.png) ![<manga-characters-nov23> 3](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/372.png) ![<manga-characters-nov23> 4](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/154.png) ![<manga-characters-nov23> 5](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/418.png) ![<manga-characters-nov23> 6](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/228.png) ![<manga-characters-nov23> 7](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/47.png) ![<manga-characters-nov23> 8](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/269.png) ![<manga-characters-nov23> 9](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/363.png) ![<manga-characters-nov23> 10](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/482.png) ![<manga-characters-nov23> 11](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/95.png) ![<manga-characters-nov23> 12](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/255.png) ![<manga-characters-nov23> 13](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/70.png) ![<manga-characters-nov23> 14](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/413.png) ![<manga-characters-nov23> 15](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/444.png) ![<manga-characters-nov23> 16](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/34.png) ![<manga-characters-nov23> 17](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/244.png) ![<manga-characters-nov23> 18](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/432.png) ![<manga-characters-nov23> 19](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/316.png) ![<manga-characters-nov23> 20](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/110.png) ![<manga-characters-nov23> 21](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/265.png) ![<manga-characters-nov23> 22](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/164.png) ![<manga-characters-nov23> 
23](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/355.png) ![<manga-characters-nov23> 24](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/296.png) ![<manga-characters-nov23> 25](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/73.png) ![<manga-characters-nov23> 26](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/60.png) ![<manga-characters-nov23> 27](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/156.png) ![<manga-characters-nov23> 28](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/55.png) ![<manga-characters-nov23> 29](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/240.png) ![<manga-characters-nov23> 30](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/314.png) ![<manga-characters-nov23> 31](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/89.png) ![<manga-characters-nov23> 32](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/145.png) ![<manga-characters-nov23> 33](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/131.png) ![<manga-characters-nov23> 34](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/41.png) ![<manga-characters-nov23> 35](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/74.png) ![<manga-characters-nov23> 36](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/43.png) ![<manga-characters-nov23> 37](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/69.png) ![<manga-characters-nov23> 38](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/206.png) ![<manga-characters-nov23> 39](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/515.png) ![<manga-characters-nov23> 40](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/379.png) ![<manga-characters-nov23> 41](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/489.png) ![<manga-characters-nov23> 42](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/250.png) ![<manga-characters-nov23> 43](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/104.png) ![<manga-characters-nov23> 44](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/405.png) ![<manga-characters-nov23> 45](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/516.png) ![<manga-characters-nov23> 46](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/328.png) ![<manga-characters-nov23> 47](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/415.png) ![<manga-characters-nov23> 48](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/139.png) ![<manga-characters-nov23> 49](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/84.png) ![<manga-characters-nov23> 50](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/83.png) ![<manga-characters-nov23> 
51](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/377.png) ![<manga-characters-nov23> 52](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/386.png) ![<manga-characters-nov23> 53](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/170.png) ![<manga-characters-nov23> 54](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/94.png) ![<manga-characters-nov23> 55](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/153.png) ![<manga-characters-nov23> 56](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/193.png) ![<manga-characters-nov23> 57](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/493.png) ![<manga-characters-nov23> 58](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/317.png) ![<manga-characters-nov23> 59](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/151.png) ![<manga-characters-nov23> 60](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/178.png) ![<manga-characters-nov23> 61](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/249.png) ![<manga-characters-nov23> 62](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/112.png) ![<manga-characters-nov23> 63](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/149.png) ![<manga-characters-nov23> 64](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/238.png) ![<manga-characters-nov23> 65](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/284.png) ![<manga-characters-nov23> 66](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/218.png) ![<manga-characters-nov23> 67](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/486.png) ![<manga-characters-nov23> 68](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/140.png) ![<manga-characters-nov23> 69](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/72.png) ![<manga-characters-nov23> 70](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/357.png) ![<manga-characters-nov23> 71](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/76.png) ![<manga-characters-nov23> 72](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/221.png) ![<manga-characters-nov23> 73](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/231.png) ![<manga-characters-nov23> 74](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/157.png) ![<manga-characters-nov23> 75](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/463.png) ![<manga-characters-nov23> 76](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/136.png) ![<manga-characters-nov23> 77](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/334.png) ![<manga-characters-nov23> 78](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/354.png) ![<manga-characters-nov23> 
79](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/273.png) ![<manga-characters-nov23> 80](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/177.png) ![<manga-characters-nov23> 81](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/518.png) ![<manga-characters-nov23> 82](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/1.png) ![<manga-characters-nov23> 83](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/457.png) ![<manga-characters-nov23> 84](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/75.png) ![<manga-characters-nov23> 85](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/48.png) ![<manga-characters-nov23> 86](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/248.png) ![<manga-characters-nov23> 87](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/123.png) ![<manga-characters-nov23> 88](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/181.png) ![<manga-characters-nov23> 89](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/57.png) ![<manga-characters-nov23> 90](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/37.png) ![<manga-characters-nov23> 91](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/279.png) ![<manga-characters-nov23> 92](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/7.png) ![<manga-characters-nov23> 93](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/190.png) ![<manga-characters-nov23> 94](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/111.png) ![<manga-characters-nov23> 95](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/359.png) ![<manga-characters-nov23> 96](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/195.png) ![<manga-characters-nov23> 97](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/118.png) ![<manga-characters-nov23> 98](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/440.png) ![<manga-characters-nov23> 99](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/361.png) ![<manga-characters-nov23> 100](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/23.png) ![<manga-characters-nov23> 101](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/407.png) ![<manga-characters-nov23> 102](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/378.png) ![<manga-characters-nov23> 103](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/28.png) ![<manga-characters-nov23> 104](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/168.png) ![<manga-characters-nov23> 105](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/3.png) ![<manga-characters-nov23> 106](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/204.png) ![<manga-characters-nov23> 
107](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/365.png) ![<manga-characters-nov23> 108](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/447.png) ![<manga-characters-nov23> 109](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/318.png) ![<manga-characters-nov23> 110](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/196.png) ![<manga-characters-nov23> 111](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/498.png) ![<manga-characters-nov23> 112](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/30.png) ![<manga-characters-nov23> 113](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/488.png) ![<manga-characters-nov23> 114](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/243.png) ![<manga-characters-nov23> 115](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/67.png) ![<manga-characters-nov23> 116](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/399.png) ![<manga-characters-nov23> 117](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/134.png) ![<manga-characters-nov23> 118](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/419.png) ![<manga-characters-nov23> 119](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/133.png) ![<manga-characters-nov23> 120](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/254.png) ![<manga-characters-nov23> 121](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/342.png) ![<manga-characters-nov23> 122](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/402.png) ![<manga-characters-nov23> 123](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/92.png) ![<manga-characters-nov23> 124](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/86.png) ![<manga-characters-nov23> 125](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/259.png) ![<manga-characters-nov23> 126](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/388.png) ![<manga-characters-nov23> 127](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/11.png) ![<manga-characters-nov23> 128](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/433.png) ![<manga-characters-nov23> 129](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/344.png) ![<manga-characters-nov23> 130](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/189.png) ![<manga-characters-nov23> 131](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/176.png) ![<manga-characters-nov23> 132](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/235.png) ![<manga-characters-nov23> 133](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/456.png) ![<manga-characters-nov23> 134](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/484.png) ![<manga-characters-nov23> 
135](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/127.png) ![<manga-characters-nov23> 136](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/200.png) ![<manga-characters-nov23> 137](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/8.png) ![<manga-characters-nov23> 138](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/266.png) ![<manga-characters-nov23> 139](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/465.png) ![<manga-characters-nov23> 140](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/350.png) ![<manga-characters-nov23> 141](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/132.png) ![<manga-characters-nov23> 142](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/150.png) ![<manga-characters-nov23> 143](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/389.png) ![<manga-characters-nov23> 144](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/129.png) ![<manga-characters-nov23> 145](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/281.png) ![<manga-characters-nov23> 146](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/115.png) ![<manga-characters-nov23> 147](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/85.png) ![<manga-characters-nov23> 148](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/409.png) ![<manga-characters-nov23> 149](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/351.png) ![<manga-characters-nov23> 150](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/520.png) ![<manga-characters-nov23> 151](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/445.png) ![<manga-characters-nov23> 152](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/65.png) ![<manga-characters-nov23> 153](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/261.png) ![<manga-characters-nov23> 154](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/179.png) ![<manga-characters-nov23> 155](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/370.png) ![<manga-characters-nov23> 156](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/12.png) ![<manga-characters-nov23> 157](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/159.png) ![<manga-characters-nov23> 158](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/13.png) ![<manga-characters-nov23> 159](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/491.png) ![<manga-characters-nov23> 160](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/267.png) ![<manga-characters-nov23> 161](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/25.png) ![<manga-characters-nov23> 162](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/161.png) ![<manga-characters-nov23> 
163](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/341.png) ![<manga-characters-nov23> 164](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/141.png) ![<manga-characters-nov23> 165](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/470.png) ![<manga-characters-nov23> 166](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/337.png) ![<manga-characters-nov23> 167](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/247.png) ![<manga-characters-nov23> 168](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/108.png) ![<manga-characters-nov23> 169](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/191.png) ![<manga-characters-nov23> 170](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/422.png) ![<manga-characters-nov23> 171](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/295.png) ![<manga-characters-nov23> 172](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/319.png) ![<manga-characters-nov23> 173](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/257.png) ![<manga-characters-nov23> 174](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/321.png) ![<manga-characters-nov23> 175](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/162.png) ![<manga-characters-nov23> 176](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/62.png) ![<manga-characters-nov23> 177](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/529.png) ![<manga-characters-nov23> 178](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/476.png) ![<manga-characters-nov23> 179](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/212.png) ![<manga-characters-nov23> 180](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/5.png) ![<manga-characters-nov23> 181](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/358.png) ![<manga-characters-nov23> 182](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/79.png) ![<manga-characters-nov23> 183](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/282.png) ![<manga-characters-nov23> 184](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/21.png) ![<manga-characters-nov23> 185](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/352.png) ![<manga-characters-nov23> 186](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/534.png) ![<manga-characters-nov23> 187](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/454.png) ![<manga-characters-nov23> 188](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/439.png) ![<manga-characters-nov23> 189](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/214.png) ![<manga-characters-nov23> 190](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/348.png) ![<manga-characters-nov23> 
191](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/339.png) ![<manga-characters-nov23> 192](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/209.png) ![<manga-characters-nov23> 193](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/505.png) ![<manga-characters-nov23> 194](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/169.png) ![<manga-characters-nov23> 195](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/258.png) ![<manga-characters-nov23> 196](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/16.png) ![<manga-characters-nov23> 197](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/397.png) ![<manga-characters-nov23> 198](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/186.png) ![<manga-characters-nov23> 199](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/437.png) ![<manga-characters-nov23> 200](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/324.png) ![<manga-characters-nov23> 201](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/297.png) ![<manga-characters-nov23> 202](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/96.png) ![<manga-characters-nov23> 203](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/256.png) ![<manga-characters-nov23> 204](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/391.png) ![<manga-characters-nov23> 205](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/160.png) ![<manga-characters-nov23> 206](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/532.png) ![<manga-characters-nov23> 207](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/343.png) ![<manga-characters-nov23> 208](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/390.png) ![<manga-characters-nov23> 209](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/143.png) ![<manga-characters-nov23> 210](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/483.png) ![<manga-characters-nov23> 211](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/105.png) ![<manga-characters-nov23> 212](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/450.png) ![<manga-characters-nov23> 213](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/163.png) ![<manga-characters-nov23> 214](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/187.png) ![<manga-characters-nov23> 215](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/33.png) ![<manga-characters-nov23> 216](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/253.png) ![<manga-characters-nov23> 217](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/420.png) ![<manga-characters-nov23> 218](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/242.png) ![<manga-characters-nov23> 
219](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/210.png) ![<manga-characters-nov23> 220](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/53.png) ![<manga-characters-nov23> 221](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/464.png) ![<manga-characters-nov23> 222](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/332.png) ![<manga-characters-nov23> 223](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/443.png) ![<manga-characters-nov23> 224](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/225.png) ![<manga-characters-nov23> 225](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/504.png) ![<manga-characters-nov23> 226](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/315.png) ![<manga-characters-nov23> 227](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/36.png) ![<manga-characters-nov23> 228](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/446.png) ![<manga-characters-nov23> 229](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/120.png) ![<manga-characters-nov23> 230](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/452.png) ![<manga-characters-nov23> 231](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/329.png) ![<manga-characters-nov23> 232](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/201.png) ![<manga-characters-nov23> 233](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/14.png) ![<manga-characters-nov23> 234](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/124.png) ![<manga-characters-nov23> 235](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/101.png) ![<manga-characters-nov23> 236](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/268.png) ![<manga-characters-nov23> 237](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/122.png) ![<manga-characters-nov23> 238](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/404.png) ![<manga-characters-nov23> 239](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/184.png) ![<manga-characters-nov23> 240](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/63.png) ![<manga-characters-nov23> 241](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/275.png) ![<manga-characters-nov23> 242](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/396.png) ![<manga-characters-nov23> 243](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/215.png) ![<manga-characters-nov23> 244](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/408.png) ![<manga-characters-nov23> 245](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/473.png) ![<manga-characters-nov23> 246](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/430.png) ![<manga-characters-nov23> 
247](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/137.png) ![<manga-characters-nov23> 248](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/271.png) ![<manga-characters-nov23> 249](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/61.png) ![<manga-characters-nov23> 250](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/416.png) ![<manga-characters-nov23> 251](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/500.png) ![<manga-characters-nov23> 252](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/180.png) ![<manga-characters-nov23> 253](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/305.png) ![<manga-characters-nov23> 254](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/467.png) ![<manga-characters-nov23> 255](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/38.png) ![<manga-characters-nov23> 256](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/64.png) ![<manga-characters-nov23> 257](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/330.png) ![<manga-characters-nov23> 258](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/461.png) ![<manga-characters-nov23> 259](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/78.png) ![<manga-characters-nov23> 260](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/148.png) ![<manga-characters-nov23> 261](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/280.png) ![<manga-characters-nov23> 262](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/292.png) ![<manga-characters-nov23> 263](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/81.png) ![<manga-characters-nov23> 264](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/167.png) ![<manga-characters-nov23> 265](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/27.png) ![<manga-characters-nov23> 266](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/278.png) ![<manga-characters-nov23> 267](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/346.png) ![<manga-characters-nov23> 268](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/366.png) ![<manga-characters-nov23> 269](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/52.png) ![<manga-characters-nov23> 270](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/49.png) ![<manga-characters-nov23> 271](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/2.png) ![<manga-characters-nov23> 272](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/367.png) ![<manga-characters-nov23> 273](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/106.png) ![<manga-characters-nov23> 274](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/20.png) ![<manga-characters-nov23> 
275](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/410.png) ![<manga-characters-nov23> 276](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/128.png) ![<manga-characters-nov23> 277](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/45.png) ![<manga-characters-nov23> 278](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/229.png) ![<manga-characters-nov23> 279](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/126.png) ![<manga-characters-nov23> 280](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/100.png) ![<manga-characters-nov23> 281](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/382.png) ![<manga-characters-nov23> 282](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/263.png) ![<manga-characters-nov23> 283](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/146.png) ![<manga-characters-nov23> 284](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/530.png) ![<manga-characters-nov23> 285](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/384.png) ![<manga-characters-nov23> 286](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/374.png) ![<manga-characters-nov23> 287](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/369.png) ![<manga-characters-nov23> 288](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/216.png) ![<manga-characters-nov23> 289](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/325.png) ![<manga-characters-nov23> 290](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/471.png) ![<manga-characters-nov23> 291](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/429.png) ![<manga-characters-nov23> 292](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/320.png) ![<manga-characters-nov23> 293](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/492.png) ![<manga-characters-nov23> 294](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/509.png) ![<manga-characters-nov23> 295](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/155.png) ![<manga-characters-nov23> 296](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/380.png) ![<manga-characters-nov23> 297](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/183.png) ![<manga-characters-nov23> 298](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/276.png) ![<manga-characters-nov23> 299](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/103.png) ![<manga-characters-nov23> 300](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/362.png) ![<manga-characters-nov23> 301](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/331.png) ![<manga-characters-nov23> 302](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/233.png) ![<manga-characters-nov23> 
303](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/262.png) ![<manga-characters-nov23> 304](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/152.png) ![<manga-characters-nov23> 305](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/326.png) ![<manga-characters-nov23> 306](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/311.png) ![<manga-characters-nov23> 307](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/6.png) ![<manga-characters-nov23> 308](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/226.png) ![<manga-characters-nov23> 309](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/121.png) ![<manga-characters-nov23> 310](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/411.png) ![<manga-characters-nov23> 311](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/222.png) ![<manga-characters-nov23> 312](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/304.png) ![<manga-characters-nov23> 313](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/513.png) ![<manga-characters-nov23> 314](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/373.png) ![<manga-characters-nov23> 315](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/487.png) ![<manga-characters-nov23> 316](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/125.png) ![<manga-characters-nov23> 317](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/90.png) ![<manga-characters-nov23> 318](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/506.png) ![<manga-characters-nov23> 319](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/466.png) ![<manga-characters-nov23> 320](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/496.png) ![<manga-characters-nov23> 321](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/294.png) ![<manga-characters-nov23> 322](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/499.png) ![<manga-characters-nov23> 323](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/197.png) ![<manga-characters-nov23> 324](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/107.png) ![<manga-characters-nov23> 325](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/223.png) ![<manga-characters-nov23> 326](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/335.png) ![<manga-characters-nov23> 327](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/453.png) ![<manga-characters-nov23> 328](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/289.png) ![<manga-characters-nov23> 329](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/417.png) ![<manga-characters-nov23> 330](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/477.png) ![<manga-characters-nov23> 
331](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/310.png) ![<manga-characters-nov23> 332](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/508.png) ![<manga-characters-nov23> 333](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/340.png) ![<manga-characters-nov23> 334](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/147.png) ![<manga-characters-nov23> 335](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/220.png) ![<manga-characters-nov23> 336](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/42.png) ![<manga-characters-nov23> 337](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/392.png) ![<manga-characters-nov23> 338](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/130.png) ![<manga-characters-nov23> 339](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/501.png) ![<manga-characters-nov23> 340](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/312.png) ![<manga-characters-nov23> 341](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/194.png) ![<manga-characters-nov23> 342](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/135.png) ![<manga-characters-nov23> 343](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/277.png) ![<manga-characters-nov23> 344](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/205.png) ![<manga-characters-nov23> 345](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/525.png) ![<manga-characters-nov23> 346](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/98.png) ![<manga-characters-nov23> 347](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/239.png) ![<manga-characters-nov23> 348](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/368.png) ![<manga-characters-nov23> 349](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/39.png) ![<manga-characters-nov23> 350](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/303.png) ![<manga-characters-nov23> 351](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/383.png) ![<manga-characters-nov23> 352](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/230.png) ![<manga-characters-nov23> 353](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/142.png) ![<manga-characters-nov23> 354](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/165.png) ![<manga-characters-nov23> 355](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/291.png) ![<manga-characters-nov23> 356](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/285.png) ![<manga-characters-nov23> 357](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/347.png) ![<manga-characters-nov23> 358](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/234.png) ![<manga-characters-nov23> 
359](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/522.png) ![<manga-characters-nov23> 360](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/356.png) ![<manga-characters-nov23> 361](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/54.png) ![<manga-characters-nov23> 362](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/495.png) ![<manga-characters-nov23> 363](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/455.png) ![<manga-characters-nov23> 364](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/0.png) ![<manga-characters-nov23> 365](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/199.png) ![<manga-characters-nov23> 366](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/198.png) ![<manga-characters-nov23> 367](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/451.png) ![<manga-characters-nov23> 368](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/322.png) ![<manga-characters-nov23> 369](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/323.png) ![<manga-characters-nov23> 370](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/97.png) ![<manga-characters-nov23> 371](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/19.png) ![<manga-characters-nov23> 372](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/171.png) ![<manga-characters-nov23> 373](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/414.png) ![<manga-characters-nov23> 374](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/423.png) ![<manga-characters-nov23> 375](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/56.png) ![<manga-characters-nov23> 376](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/531.png) ![<manga-characters-nov23> 377](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/9.png) ![<manga-characters-nov23> 378](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/237.png) ![<manga-characters-nov23> 379](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/349.png) ![<manga-characters-nov23> 380](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/460.png) ![<manga-characters-nov23> 381](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/475.png) ![<manga-characters-nov23> 382](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/117.png) ![<manga-characters-nov23> 383](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/494.png) ![<manga-characters-nov23> 384](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/68.png) ![<manga-characters-nov23> 385](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/87.png) ![<manga-characters-nov23> 386](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/406.png) ![<manga-characters-nov23> 
387](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/523.png) ![<manga-characters-nov23> 388](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/114.png) ![<manga-characters-nov23> 389](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/44.png) ![<manga-characters-nov23> 390](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/10.png) ![<manga-characters-nov23> 391](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/514.png) ![<manga-characters-nov23> 392](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/185.png) ![<manga-characters-nov23> 393](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/272.png) ![<manga-characters-nov23> 394](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/286.png) ![<manga-characters-nov23> 395](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/394.png) ![<manga-characters-nov23> 396](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/375.png) ![<manga-characters-nov23> 397](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/58.png) ![<manga-characters-nov23> 398](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/425.png) ![<manga-characters-nov23> 399](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/26.png) ![<manga-characters-nov23> 400](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/436.png) ![<manga-characters-nov23> 401](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/35.png) ![<manga-characters-nov23> 402](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/91.png) ![<manga-characters-nov23> 403](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/71.png) ![<manga-characters-nov23> 404](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/172.png) ![<manga-characters-nov23> 405](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/524.png) ![<manga-characters-nov23> 406](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/497.png) ![<manga-characters-nov23> 407](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/300.png) ![<manga-characters-nov23> 408](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/144.png) ![<manga-characters-nov23> 409](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/469.png) ![<manga-characters-nov23> 410](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/517.png) ![<manga-characters-nov23> 411](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/428.png) ![<manga-characters-nov23> 412](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/290.png) ![<manga-characters-nov23> 413](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/327.png) ![<manga-characters-nov23> 414](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/175.png) ![<manga-characters-nov23> 
415](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/245.png) ![<manga-characters-nov23> 416](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/264.png) ![<manga-characters-nov23> 417](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/298.png) ![<manga-characters-nov23> 418](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/472.png) ![<manga-characters-nov23> 419](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/533.png) ![<manga-characters-nov23> 420](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/371.png) ![<manga-characters-nov23> 421](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/211.png) ![<manga-characters-nov23> 422](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/360.png) ![<manga-characters-nov23> 423](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/385.png) ![<manga-characters-nov23> 424](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/448.png) ![<manga-characters-nov23> 425](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/308.png) ![<manga-characters-nov23> 426](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/353.png) ![<manga-characters-nov23> 427](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/307.png) ![<manga-characters-nov23> 428](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/503.png) ![<manga-characters-nov23> 429](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/17.png) ![<manga-characters-nov23> 430](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/102.png) ![<manga-characters-nov23> 431](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/393.png) ![<manga-characters-nov23> 432](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/288.png) ![<manga-characters-nov23> 433](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/345.png) ![<manga-characters-nov23> 434](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/174.png) ![<manga-characters-nov23> 435](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/490.png) ![<manga-characters-nov23> 436](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/4.png) ![<manga-characters-nov23> 437](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/462.png) ![<manga-characters-nov23> 438](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/468.png) ![<manga-characters-nov23> 439](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/270.png) ![<manga-characters-nov23> 440](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/251.png) ![<manga-characters-nov23> 441](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/224.png) ![<manga-characters-nov23> 442](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/274.png) ![<manga-characters-nov23> 
443](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/519.png) ![<manga-characters-nov23> 444](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/241.png) ![<manga-characters-nov23> 445](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/173.png) ![<manga-characters-nov23> 446](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/32.png) ![<manga-characters-nov23> 447](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/29.png) ![<manga-characters-nov23> 448](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/449.png) ![<manga-characters-nov23> 449](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/435.png) ![<manga-characters-nov23> 450](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/313.png) ![<manga-characters-nov23> 451](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/77.png) ![<manga-characters-nov23> 452](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/293.png) ![<manga-characters-nov23> 453](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/441.png) ![<manga-characters-nov23> 454](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/24.png) ![<manga-characters-nov23> 455](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/236.png) ![<manga-characters-nov23> 456](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/182.png) ![<manga-characters-nov23> 457](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/18.png) ![<manga-characters-nov23> 458](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/116.png) ![<manga-characters-nov23> 459](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/82.png) ![<manga-characters-nov23> 460](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/40.png) ![<manga-characters-nov23> 461](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/213.png) ![<manga-characters-nov23> 462](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/217.png) ![<manga-characters-nov23> 463](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/119.png) ![<manga-characters-nov23> 464](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/507.png) ![<manga-characters-nov23> 465](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/299.png) ![<manga-characters-nov23> 466](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/398.png) ![<manga-characters-nov23> 467](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/50.png) ![<manga-characters-nov23> 468](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/336.png) ![<manga-characters-nov23> 469](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/166.png) ![<manga-characters-nov23> 470](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/246.png) ![<manga-characters-nov23> 
471](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/412.png) ![<manga-characters-nov23> 472](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/283.png) ![<manga-characters-nov23> 473](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/46.png) ![<manga-characters-nov23> 474](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/421.png) ![<manga-characters-nov23> 475](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/138.png) ![<manga-characters-nov23> 476](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/99.png) ![<manga-characters-nov23> 477](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/88.png) ![<manga-characters-nov23> 478](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/188.png) ![<manga-characters-nov23> 479](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/510.png) ![<manga-characters-nov23> 480](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/438.png) ![<manga-characters-nov23> 481](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/511.png) ![<manga-characters-nov23> 482](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/424.png) ![<manga-characters-nov23> 483](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/202.png) ![<manga-characters-nov23> 484](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/22.png) ![<manga-characters-nov23> 485](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/192.png) ![<manga-characters-nov23> 486](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/400.png) ![<manga-characters-nov23> 487](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/203.png) ![<manga-characters-nov23> 488](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/232.png) ![<manga-characters-nov23> 489](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/426.png) ![<manga-characters-nov23> 490](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/309.png) ![<manga-characters-nov23> 491](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/31.png) ![<manga-characters-nov23> 492](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/434.png) ![<manga-characters-nov23> 493](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/458.png) ![<manga-characters-nov23> 494](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/431.png) ![<manga-characters-nov23> 495](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/333.png) ![<manga-characters-nov23> 496](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/403.png) ![<manga-characters-nov23> 497](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/15.png) ![<manga-characters-nov23> 498](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/401.png) ![<manga-characters-nov23> 
499](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/512.png) ![<manga-characters-nov23> 500](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/109.png) ![<manga-characters-nov23> 501](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/66.png) ![<manga-characters-nov23> 502](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/208.png) ![<manga-characters-nov23> 503](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/51.png) ![<manga-characters-nov23> 504](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/113.png) ![<manga-characters-nov23> 505](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/364.png) ![<manga-characters-nov23> 506](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/442.png) ![<manga-characters-nov23> 507](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/485.png) ![<manga-characters-nov23> 508](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/219.png) ![<manga-characters-nov23> 509](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/80.png) ![<manga-characters-nov23> 510](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/395.png) ![<manga-characters-nov23> 511](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/481.png) ![<manga-characters-nov23> 512](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/252.png) ![<manga-characters-nov23> 513](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/301.png) ![<manga-characters-nov23> 514](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/480.png) ![<manga-characters-nov23> 515](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/260.png) ![<manga-characters-nov23> 516](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/287.png) ![<manga-characters-nov23> 517](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/227.png) ![<manga-characters-nov23> 518](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/302.png) ![<manga-characters-nov23> 519](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/338.png) ![<manga-characters-nov23> 520](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/521.png) ![<manga-characters-nov23> 521](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/502.png) ![<manga-characters-nov23> 522](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/387.png) ![<manga-characters-nov23> 523](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/306.png) ![<manga-characters-nov23> 524](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/478.png) ![<manga-characters-nov23> 525](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/528.png) ![<manga-characters-nov23> 526](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/381.png) ![<manga-characters-nov23> 
527](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/158.png) ![<manga-characters-nov23> 528](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/526.png) ![<manga-characters-nov23> 529](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/93.png) ![<manga-characters-nov23> 530](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/207.png) ![<manga-characters-nov23> 531](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/427.png) ![<manga-characters-nov23> 532](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/527.png) ![<manga-characters-nov23> 533](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/474.png) ![<manga-characters-nov23> 534](https://huggingface.co/sd-concepts-library/manga-nov-23/resolve/main/concept_images/479.png)
AnonymousSub/declutr-model_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-53-torgo-demo-f01-nolm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-torgo-demo-f01-nolm This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0153 - Wer: 0.4756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.4166 | 0.81 | 500 | 4.5019 | 1.0 | | 3.1088 | 1.62 | 1000 | 3.0459 | 1.0 | | 2.8249 | 2.44 | 1500 | 3.0850 | 1.0 | | 2.625 | 3.25 | 2000 | 2.6827 | 1.3656 | | 1.9816 | 4.06 | 2500 | 1.6636 | 1.3701 | | 1.3036 | 4.87 | 3000 | 0.9710 | 1.2504 | | 0.9862 | 5.68 | 3500 | 0.6023 | 1.0519 | | 0.7012 | 6.49 | 4000 | 0.4404 | 0.9342 | | 0.6102 | 7.31 | 4500 | 0.3297 | 0.8491 | | 0.5463 | 8.12 | 5000 | 0.2403 | 0.7773 | | 0.4897 | 8.93 | 5500 | 0.1907 | 0.7335 | | 0.4687 | 9.74 | 6000 | 0.1721 | 0.7095 | | 0.41 | 10.55 | 6500 | 0.1382 | 0.6851 | | 0.3277 | 11.36 | 7000 | 0.1189 | 0.6598 | | 0.3182 | 12.18 | 7500 | 0.1040 | 0.6372 | | 0.3279 | 12.99 | 8000 | 0.0961 | 0.6274 | | 0.2735 | 13.8 | 8500 | 0.0806 | 0.5880 | | 0.3153 | 14.61 | 9000 | 0.0821 | 0.5748 | | 0.251 | 15.42 | 9500 | 0.0633 | 0.5437 | | 0.2 | 16.23 | 10000 | 0.0534 | 0.5316 | | 0.2134 | 17.05 | 10500 | 0.0475 | 0.5195 | | 0.1727 | 17.86 | 11000 | 0.0435 | 0.5146 | | 0.2143 | 18.67 | 11500 | 0.0406 | 0.5072 | | 0.1679 | 19.48 | 12000 | 0.0386 | 0.5057 | | 0.1836 | 20.29 | 12500 | 0.0359 | 0.4984 | | 0.1542 | 21.1 | 13000 | 0.0284 | 0.4914 | | 0.1672 | 21.92 | 13500 | 0.0289 | 0.4884 | | 0.1526 | 22.73 | 14000 | 0.0256 | 0.4867 | | 0.1263 | 23.54 | 14500 | 0.0247 | 0.4871 | | 0.133 | 24.35 | 15000 | 0.0194 | 0.4816 | | 0.1005 | 25.16 | 15500 | 0.0190 | 0.4798 | | 0.1372 | 25.97 | 16000 | 0.0172 | 0.4786 | | 0.1126 | 26.79 | 16500 | 0.0177 | 0.4773 | | 0.0929 | 27.6 | 17000 | 0.0173 | 0.4775 | | 0.1069 | 28.41 | 17500 | 0.0164 | 0.4773 | | 0.0932 | 29.22 | 18000 | 0.0153 | 0.4756 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.0.0 - Tokenizers 0.13.2
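As a hedged addendum to the card above (which stops at the training curves), here is a minimal inference sketch for a fine-tuned wav2vec2 CTC checkpoint like this one. The repo id and audio filename are placeholders, since the card does not state the full hub path, and decoding is plain greedy CTC, matching the "nolm" (no language model) suffix.

```python
# Minimal sketch: transcribing one audio file with a fine-tuned wav2vec2 CTC model.
# "your-namespace/..." and "sample.wav" are placeholders -- substitute real values.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

repo_id = "your-namespace/wav2vec2-large-xlsr-53-torgo-demo-f01-nolm"  # placeholder
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)
model.eval()

# wav2vec2 expects 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding, i.e. no external language model
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```

The same checkpoint can also be decoded in two lines with `pipeline("automatic-speech-recognition", model=repo_id)`; the explicit version above just makes the CTC decoding step visible.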
AnonymousSub/declutr-roberta-papers
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: openrail --- KerasCV StableDiffusion weights for StableDiffusion v1.5 ported from: https://huggingface.co/runwayml/stable-diffusion-v1-5
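As a hedged illustration of how ported KerasCV weights like these are typically used, the sketch below builds the KerasCV StableDiffusion graph and loads one weight file per sub-model. The repo id and `.h5` filenames are assumptions; check this repository's file listing for the actual names.

```python
# Hedged sketch: loading ported StableDiffusion v1.5 weights into KerasCV.
# Repo id and filenames below are placeholders, not confirmed by the card.
import keras_cv
from huggingface_hub import hf_hub_download

model = keras_cv.models.StableDiffusion(img_height=512, img_width=512)

# Download the ported weight files (names are assumptions)
text_encoder_w = hf_hub_download(repo_id="your-namespace/sd-v1-5-kerascv", filename="text_encoder.h5")
diffusion_w = hf_hub_download(repo_id="your-namespace/sd-v1-5-kerascv", filename="diffusion_model.h5")
decoder_w = hf_hub_download(repo_id="your-namespace/sd-v1-5-kerascv", filename="decoder.h5")

model.text_encoder.load_weights(text_encoder_w)
model.diffusion_model.load_weights(diffusion_w)
model.decoder.load_weights(decoder_w)

images = model.text_to_image("a photograph of an astronaut riding a horse", batch_size=1)
```

KerasCV keeps the text encoder, diffusion U-Net, and image decoder as separate Keras models, so ports like this one usually ship one weight file per sub-model rather than a single checkpoint.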
AnonymousSub/declutr-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: relation-distilbert-em results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # relation-distilbert-em This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7801 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6947 | 1.0 | 2812 | 0.7616 | | 0.6946 | 2.0 | 5624 | 0.7740 | | 0.6944 | 3.0 | 8436 | 0.7801 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
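The card above includes no usage snippet, so here is a hedged sketch: the checkpoint can be queried like any DistilBERT sequence classifier. The repo id is a placeholder (the card does not state the full hub path), and the label names depend on how the classification head was configured.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual hub path of this fine-tune.
classifier = pipeline(
    "text-classification",
    model="your-namespace/relation-distilbert-em",
)

# Returns a list of {"label": ..., "score": ...} dicts, one per input string.
print(classifier("Marie Curie was born in Warsaw."))
```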
AnonymousSub/hier_triplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - automatic-speech-recognition - gary109/AI_Light_Dance - generated_from_trainer datasets: - ai_light_dance metrics: - wer model-index: - name: ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v6 This model is a fine-tuned version of [gary109/ai-light-dance_drums_pretrain_wav2vec2-base-new](https://huggingface.co/gary109/ai-light-dance_drums_pretrain_wav2vec2-base-new) on the GARY109/AI_LIGHT_DANCE - ONSET-IDMT-MDB-ENST dataset. It achieves the following results on the evaluation set: - Loss: 0.6823 - Wer: 0.3851 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 30 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 17.274 | 0.99 | 35 | 2.9045 | 1.0 | | 1.8443 | 1.99 | 70 | 3.5065 | 1.0 | | 1.709 | 2.99 | 105 | 2.0072 | 1.0 | | 1.4981 | 3.99 | 140 | 1.9510 | 0.9688 | | 1.2977 | 4.99 | 175 | 1.8863 | 0.5534 | | 1.1257 | 5.99 | 210 | 1.9137 | 0.4833 | | 1.1218 | 6.99 | 245 | 1.9707 | 0.4960 | | 0.8878 | 7.99 | 280 | 1.4179 | 0.4774 | | 0.8562 | 8.99 | 315 | 1.5276 | 0.4635 | | 1.5769 | 9.99 | 350 | 1.1270 | 0.4509 | | 0.796 | 10.99 | 385 | 1.2706 | 0.4496 | | 0.8776 | 11.99 | 420 | 1.2372 | 0.4471 | | 0.7417 | 12.99 | 455 | 1.2826 | 0.4382 | | 0.8273 | 13.99 | 490 | 1.2425 | 0.4542 | | 0.7164 | 14.99 | 525 | 1.1415 | 0.4192 | | 0.7061 | 15.99 | 560 | 1.2315 | 0.4407 | | 0.6553 | 16.99 | 595 | 0.9983 | 0.4112 | | 0.7114 | 17.99 | 630 | 1.1510 | 0.4382 | | 0.6467 | 18.99 | 665 | 1.0612 | 0.4049 | | 0.6035 | 19.99 | 700 | 1.0360 | 0.4188 | | 0.6058 | 20.99 | 735 | 1.0008 | 0.4137 | | 0.682 | 21.99 | 770 | 1.1948 | 0.4209 | | 0.566 | 22.99 | 805 | 1.0555 | 0.4133 | | 0.5952 | 23.99 | 840 | 0.8615 | 0.4095 | | 0.5889 | 24.99 | 875 | 1.0740 | 0.4302 | | 0.5954 | 25.99 | 910 | 1.1465 | 0.4167 | | 0.5615 | 26.99 | 945 | 0.8980 | 0.4074 | | 0.5385 | 27.99 | 980 | 0.8443 | 0.4062 | | 0.5097 | 28.99 | 1015 | 1.1464 | 0.4049 | | 0.5224 | 29.99 | 1050 | 1.0213 | 0.4003 | | 0.5226 | 30.99 | 1085 | 0.8601 | 0.4091 | | 0.5303 | 31.99 | 1120 | 1.0191 | 0.3986 | | 0.6457 | 32.99 | 1155 | 1.2443 | 0.4306 | | 0.5305 | 33.99 | 1190 | 0.9872 | 0.4171 | | 0.5179 | 34.99 | 1225 | 1.0433 | 0.3935 | | 0.471 | 35.99 | 1260 | 1.0011 | 0.4074 | | 0.473 | 36.99 | 1295 | 0.8887 | 0.3901 | | 0.5465 | 37.99 | 1330 | 0.8612 | 0.3897 | | 0.4584 | 38.99 | 1365 | 0.9581 | 0.4070 | | 0.565 | 39.99 | 1400 | 1.0735 | 0.4083 | | 0.4916 | 40.99 | 1435 | 0.8890 | 0.3906 | | 0.4643 | 41.99 | 1470 | 0.7317 | 0.4040 | | 0.4633 | 42.99 | 1505 | 0.9384 | 0.4142 | | 0.4867 | 43.99 | 1540 | 0.8899 | 0.4074 | | 0.4892 | 44.99 | 1575 | 0.8419 | 0.4053 | | 0.4338 | 45.99 | 1610 | 0.8297 | 0.4024 | | 0.4038 | 46.99 | 1645 | 0.9689 | 0.3825 | | 0.4519 | 47.99 | 1680 | 0.8536 
| 0.4053 | | 0.4298 | 48.99 | 1715 | 0.9737 | 0.3796 | | 0.4622 | 49.99 | 1750 | 0.9054 | 0.4074 | | 0.4358 | 50.99 | 1785 | 0.7809 | 0.3813 | | 0.4277 | 51.99 | 1820 | 0.8464 | 0.3922 | | 0.4186 | 52.99 | 1855 | 0.8106 | 0.3956 | | 0.413 | 53.99 | 1890 | 0.9219 | 0.3813 | | 0.4262 | 54.99 | 1925 | 0.9600 | 0.3990 | | 0.4542 | 55.99 | 1960 | 0.8444 | 0.4057 | | 0.3966 | 56.99 | 1995 | 0.7814 | 0.3914 | | 0.444 | 57.99 | 2030 | 0.8331 | 0.3771 | | 0.4673 | 58.99 | 2065 | 0.7872 | 0.3960 | | 0.483 | 59.99 | 2100 | 1.0760 | 0.4036 | | 0.5059 | 60.99 | 2135 | 0.8133 | 0.3981 | | 0.3927 | 61.99 | 2170 | 0.8601 | 0.4032 | | 0.4297 | 62.99 | 2205 | 0.7363 | 0.3880 | | 0.4034 | 63.99 | 2240 | 0.7639 | 0.4028 | | 0.3731 | 64.99 | 2275 | 0.8137 | 0.3686 | | 0.3793 | 65.99 | 2310 | 0.7646 | 0.3787 | | 0.3593 | 66.99 | 2345 | 0.7878 | 0.3952 | | 0.3616 | 67.99 | 2380 | 0.7936 | 0.4045 | | 0.3991 | 68.99 | 2415 | 0.7425 | 0.3775 | | 0.3709 | 69.99 | 2450 | 0.6933 | 0.3834 | | 0.3886 | 70.99 | 2485 | 0.7044 | 0.3728 | | 0.3624 | 71.99 | 2520 | 0.6916 | 0.3922 | | 0.3477 | 72.99 | 2555 | 0.7245 | 0.3872 | | 0.4116 | 73.99 | 2590 | 0.6823 | 0.3851 | | 0.3956 | 74.99 | 2625 | 0.7743 | 0.3846 | | 0.386 | 75.99 | 2660 | 0.7772 | 0.3943 | | 0.3755 | 76.99 | 2695 | 0.7823 | 0.3741 | | 0.3569 | 77.99 | 2730 | 0.7801 | 0.3880 | | 0.3403 | 78.99 | 2765 | 0.7619 | 0.3783 | | 0.3623 | 79.99 | 2800 | 0.7294 | 0.3834 | | 0.4157 | 80.99 | 2835 | 0.7345 | 0.3855 | | 0.3569 | 81.99 | 2870 | 0.7349 | 0.3804 | | 0.3988 | 82.99 | 2905 | 0.7232 | 0.3834 | | 0.3425 | 83.99 | 2940 | 0.7239 | 0.3792 | | 0.353 | 84.99 | 2975 | 0.7367 | 0.3758 | | 0.3756 | 85.99 | 3010 | 0.7283 | 0.3728 | | 0.3702 | 86.99 | 3045 | 0.7044 | 0.3792 | | 0.3339 | 87.99 | 3080 | 0.7279 | 0.3766 | | 0.3161 | 88.99 | 3115 | 0.7680 | 0.3796 | | 0.3573 | 89.99 | 3150 | 0.7498 | 0.3733 | | 0.3557 | 90.99 | 3185 | 0.7433 | 0.3779 | | 0.3563 | 91.99 | 3220 | 0.7249 | 0.3787 | | 0.3304 | 92.99 | 3255 | 0.7543 | 0.3783 | | 0.3596 | 93.99 | 3290 | 0.7329 | 0.3733 | | 0.3548 | 94.99 | 3325 | 0.7531 | 0.3720 | | 0.3269 | 95.99 | 3360 | 0.7377 | 0.3712 | | 0.3289 | 96.99 | 3395 | 0.7378 | 0.3749 | | 0.2978 | 97.99 | 3430 | 0.7200 | 0.3728 | | 0.3075 | 98.99 | 3465 | 0.7210 | 0.3724 | | 0.3402 | 99.99 | 3500 | 0.7173 | 0.3737 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.8.1+cu111 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
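A hedged usage sketch for the checkpoint above: it is a CTC model, so the standard ASR pipeline can decode it, although given the drum-onset datasets the "transcript" appears to be a sequence of drum-event tokens rather than words. The namespace below is inferred from the base checkpoint named in the card and should be verified against the actual hub path.

```python
from transformers import pipeline

# Namespace guessed from the base checkpoint in the card -- verify before use.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v6",
)

out = transcriber("drum_loop.wav")  # any local audio file
print(out["text"])                  # token sequence emitted by the CTC head
```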
AnonymousSub/roberta-base_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- library_name: stable-baselines3 tags: - ALE/MsPacman-v5 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ALE/MsPacman-v5 type: ALE/MsPacman-v5 metrics: - type: mean_reward value: 2934.00 +/- 982.27 name: mean_reward verified: false --- # **PPO** Agent playing **ALE/MsPacman-v5** This is a trained model of a **PPO** agent playing **ALE/MsPacman-v5** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ppo --env ALE/MsPacman-v5 -orga xaeroq -f logs/ python enjoy.py --algo ppo --env ALE/MsPacman-v5 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ppo --env ALE/MsPacman-v5 -orga xaeroq -f logs/ rl_zoo3 enjoy --algo ppo --env ALE/MsPacman-v5 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo --env ALE/MsPacman-v5 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ppo --env ALE/MsPacman-v5 -f logs/ -orga xaeroq ``` ## Hyperparameters ```python OrderedDict([('batch_size', 256), ('clip_range', 'lin_0.1'), ('ent_coef', 0.01), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('frame_stack', 4), ('learning_rate', 'lin_2.5e-4'), ('n_envs', 8), ('n_epochs', 4), ('n_steps', 128), ('n_timesteps', 10000000.0), ('policy', 'CnnPolicy'), ('vf_coef', 0.5), ('normalize', False)]) ```
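For use outside the RL Zoo scripts, a hedged sketch of loading this checkpoint directly with Stable-Baselines3 follows. The repo id and zip filename are placeholders (check the repository's file listing), the env setup mirrors the AtariWrapper preprocessing and 4-frame stacking listed in the hyperparameters, and running Atari envs assumes `gymnasium[atari]` plus the ROMs are installed.

```python
# Hedged sketch: loading the trained PPO policy without the RL Zoo CLI.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

checkpoint = load_from_hub(
    repo_id="xaeroq/ppo-MsPacman-v5",      # placeholder repo id
    filename="ppo-ALE-MsPacman-v5.zip",    # placeholder filename
)
model = PPO.load(checkpoint)

# Mirror the training setup: AtariWrapper preprocessing + 4-frame stacking
env = make_atari_env("ALE/MsPacman-v5", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```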
AnonymousSub/roberta-base_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: mit --- ### manga char nov 23 on Stable Diffusion This is the `<char-nov23>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<char-nov23> 0](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/376.png) ![<char-nov23> 1](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/459.png) ![<char-nov23> 2](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/59.png) ![<char-nov23> 3](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/372.png) ![<char-nov23> 4](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/154.png) ![<char-nov23> 5](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/418.png) ![<char-nov23> 6](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/228.png) ![<char-nov23> 7](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/47.png) ![<char-nov23> 8](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/269.png) ![<char-nov23> 9](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/363.png) ![<char-nov23> 10](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/482.png) ![<char-nov23> 11](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/95.png) ![<char-nov23> 12](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/255.png) ![<char-nov23> 13](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/70.png) ![<char-nov23> 14](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/413.png) ![<char-nov23> 15](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/444.png) ![<char-nov23> 16](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/34.png) ![<char-nov23> 17](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/244.png) ![<char-nov23> 18](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/432.png) ![<char-nov23> 19](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/316.png) ![<char-nov23> 20](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/110.png) ![<char-nov23> 21](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/265.png) ![<char-nov23> 22](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/164.png) ![<char-nov23> 23](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/355.png) ![<char-nov23> 24](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/296.png) ![<char-nov23> 
25](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/73.png) ![<char-nov23> 26](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/60.png) ![<char-nov23> 27](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/156.png) ![<char-nov23> 28](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/55.png) ![<char-nov23> 29](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/240.png) ![<char-nov23> 30](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/314.png) ![<char-nov23> 31](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/89.png) ![<char-nov23> 32](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/145.png) ![<char-nov23> 33](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/131.png) ![<char-nov23> 34](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/41.png) ![<char-nov23> 35](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/74.png) ![<char-nov23> 36](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/43.png) ![<char-nov23> 37](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/69.png) ![<char-nov23> 38](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/206.png) ![<char-nov23> 39](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/515.png) ![<char-nov23> 40](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/379.png) ![<char-nov23> 41](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/489.png) ![<char-nov23> 42](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/250.png) ![<char-nov23> 43](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/104.png) ![<char-nov23> 44](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/405.png) ![<char-nov23> 45](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/516.png) ![<char-nov23> 46](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/328.png) ![<char-nov23> 47](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/415.png) ![<char-nov23> 48](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/139.png) ![<char-nov23> 49](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/84.png) ![<char-nov23> 50](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/83.png) ![<char-nov23> 51](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/377.png) ![<char-nov23> 52](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/386.png) ![<char-nov23> 53](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/170.png) ![<char-nov23> 54](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/94.png) ![<char-nov23> 
55](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/153.png) ![<char-nov23> 56](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/193.png) ![<char-nov23> 57](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/493.png) ![<char-nov23> 58](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/317.png) ![<char-nov23> 59](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/151.png) ![<char-nov23> 60](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/178.png) ![<char-nov23> 61](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/249.png) ![<char-nov23> 62](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/112.png) ![<char-nov23> 63](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/149.png) ![<char-nov23> 64](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/238.png) ![<char-nov23> 65](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/284.png) ![<char-nov23> 66](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/218.png) ![<char-nov23> 67](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/486.png) ![<char-nov23> 68](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/140.png) ![<char-nov23> 69](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/72.png) ![<char-nov23> 70](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/357.png) ![<char-nov23> 71](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/76.png) ![<char-nov23> 72](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/221.png) ![<char-nov23> 73](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/231.png) ![<char-nov23> 74](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/157.png) ![<char-nov23> 75](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/463.png) ![<char-nov23> 76](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/136.png) ![<char-nov23> 77](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/334.png) ![<char-nov23> 78](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/354.png) ![<char-nov23> 79](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/273.png) ![<char-nov23> 80](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/177.png) ![<char-nov23> 81](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/518.png) ![<char-nov23> 82](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/1.png) ![<char-nov23> 83](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/457.png) ![<char-nov23> 84](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/75.png) ![<char-nov23> 
85](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/48.png) ![<char-nov23> 86](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/248.png) ![<char-nov23> 87](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/123.png) ![<char-nov23> 88](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/181.png) ![<char-nov23> 89](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/57.png) ![<char-nov23> 90](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/37.png) ![<char-nov23> 91](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/279.png) ![<char-nov23> 92](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/7.png) ![<char-nov23> 93](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/190.png) ![<char-nov23> 94](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/111.png) ![<char-nov23> 95](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/359.png) ![<char-nov23> 96](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/195.png) ![<char-nov23> 97](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/118.png) ![<char-nov23> 98](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/440.png) ![<char-nov23> 99](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/361.png) ![<char-nov23> 100](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/23.png) ![<char-nov23> 101](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/407.png) ![<char-nov23> 102](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/378.png) ![<char-nov23> 103](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/28.png) ![<char-nov23> 104](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/168.png) ![<char-nov23> 105](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/3.png) ![<char-nov23> 106](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/204.png) ![<char-nov23> 107](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/365.png) ![<char-nov23> 108](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/447.png) ![<char-nov23> 109](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/318.png) ![<char-nov23> 110](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/196.png) ![<char-nov23> 111](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/498.png) ![<char-nov23> 112](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/30.png) ![<char-nov23> 113](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/488.png) ![<char-nov23> 114](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/243.png) ![<char-nov23> 
115](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/67.png) ![<char-nov23> 116](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/399.png) ![<char-nov23> 117](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/134.png) ![<char-nov23> 118](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/419.png) ![<char-nov23> 119](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/133.png) ![<char-nov23> 120](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/254.png) ![<char-nov23> 121](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/342.png) ![<char-nov23> 122](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/402.png) ![<char-nov23> 123](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/92.png) ![<char-nov23> 124](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/86.png) ![<char-nov23> 125](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/259.png) ![<char-nov23> 126](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/388.png) ![<char-nov23> 127](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/11.png) ![<char-nov23> 128](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/433.png) ![<char-nov23> 129](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/344.png) ![<char-nov23> 130](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/189.png) ![<char-nov23> 131](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/176.png) ![<char-nov23> 132](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/235.png) ![<char-nov23> 133](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/456.png) ![<char-nov23> 134](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/484.png) ![<char-nov23> 135](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/127.png) ![<char-nov23> 136](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/200.png) ![<char-nov23> 137](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/8.png) ![<char-nov23> 138](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/266.png) ![<char-nov23> 139](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/465.png) ![<char-nov23> 140](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/350.png) ![<char-nov23> 141](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/132.png) ![<char-nov23> 142](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/150.png) ![<char-nov23> 143](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/389.png) ![<char-nov23> 144](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/129.png) ![<char-nov23> 
145](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/281.png) ![<char-nov23> 146](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/115.png) ![<char-nov23> 147](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/85.png) ![<char-nov23> 148](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/409.png) ![<char-nov23> 149](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/351.png) ![<char-nov23> 150](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/520.png) ![<char-nov23> 151](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/445.png) ![<char-nov23> 152](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/65.png) ![<char-nov23> 153](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/261.png) ![<char-nov23> 154](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/179.png) ![<char-nov23> 155](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/370.png) ![<char-nov23> 156](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/12.png) ![<char-nov23> 157](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/159.png) ![<char-nov23> 158](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/13.png) ![<char-nov23> 159](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/491.png) ![<char-nov23> 160](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/267.png) ![<char-nov23> 161](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/25.png) ![<char-nov23> 162](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/161.png) ![<char-nov23> 163](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/341.png) ![<char-nov23> 164](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/141.png) ![<char-nov23> 165](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/470.png) ![<char-nov23> 166](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/337.png) ![<char-nov23> 167](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/247.png) ![<char-nov23> 168](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/108.png) ![<char-nov23> 169](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/191.png) ![<char-nov23> 170](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/422.png) ![<char-nov23> 171](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/295.png) ![<char-nov23> 172](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/319.png) ![<char-nov23> 173](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/257.png) ![<char-nov23> 174](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/321.png) ![<char-nov23> 
175](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/162.png) ![<char-nov23> 176](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/62.png) ![<char-nov23> 177](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/529.png) ![<char-nov23> 178](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/476.png) ![<char-nov23> 179](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/212.png) ![<char-nov23> 180](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/5.png) ![<char-nov23> 181](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/358.png) ![<char-nov23> 182](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/79.png) ![<char-nov23> 183](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/282.png) ![<char-nov23> 184](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/21.png) ![<char-nov23> 185](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/352.png) ![<char-nov23> 186](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/534.png) ![<char-nov23> 187](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/454.png) ![<char-nov23> 188](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/439.png) ![<char-nov23> 189](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/214.png) ![<char-nov23> 190](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/348.png) ![<char-nov23> 191](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/339.png) ![<char-nov23> 192](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/209.png) ![<char-nov23> 193](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/505.png) ![<char-nov23> 194](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/169.png) ![<char-nov23> 195](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/258.png) ![<char-nov23> 196](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/16.png) ![<char-nov23> 197](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/397.png) ![<char-nov23> 198](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/186.png) ![<char-nov23> 199](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/437.png) ![<char-nov23> 200](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/324.png) ![<char-nov23> 201](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/297.png) ![<char-nov23> 202](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/96.png) ![<char-nov23> 203](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/256.png) ![<char-nov23> 204](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/391.png) ![<char-nov23> 
205](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/160.png) ![<char-nov23> 206](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/532.png) ![<char-nov23> 207](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/343.png) ![<char-nov23> 208](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/390.png) ![<char-nov23> 209](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/143.png) ![<char-nov23> 210](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/483.png) ![<char-nov23> 211](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/105.png) ![<char-nov23> 212](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/450.png) ![<char-nov23> 213](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/163.png) ![<char-nov23> 214](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/187.png) ![<char-nov23> 215](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/33.png) ![<char-nov23> 216](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/253.png) ![<char-nov23> 217](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/420.png) ![<char-nov23> 218](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/242.png) ![<char-nov23> 219](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/210.png) ![<char-nov23> 220](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/53.png) ![<char-nov23> 221](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/464.png) ![<char-nov23> 222](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/332.png) ![<char-nov23> 223](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/443.png) ![<char-nov23> 224](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/225.png) ![<char-nov23> 225](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/504.png) ![<char-nov23> 226](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/315.png) ![<char-nov23> 227](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/36.png) ![<char-nov23> 228](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/446.png) ![<char-nov23> 229](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/120.png) ![<char-nov23> 230](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/452.png) ![<char-nov23> 231](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/329.png) ![<char-nov23> 232](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/201.png) ![<char-nov23> 233](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/14.png) ![<char-nov23> 234](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/124.png) ![<char-nov23> 
235](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/101.png) ![<char-nov23> 236](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/268.png) ![<char-nov23> 237](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/122.png) ![<char-nov23> 238](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/404.png) ![<char-nov23> 239](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/184.png) ![<char-nov23> 240](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/63.png) ![<char-nov23> 241](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/275.png) ![<char-nov23> 242](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/396.png) ![<char-nov23> 243](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/215.png) ![<char-nov23> 244](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/408.png) ![<char-nov23> 245](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/473.png) ![<char-nov23> 246](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/430.png) ![<char-nov23> 247](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/137.png) ![<char-nov23> 248](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/271.png) ![<char-nov23> 249](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/61.png) ![<char-nov23> 250](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/416.png) ![<char-nov23> 251](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/500.png) ![<char-nov23> 252](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/180.png) ![<char-nov23> 253](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/305.png) ![<char-nov23> 254](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/467.png) ![<char-nov23> 255](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/38.png) ![<char-nov23> 256](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/64.png) ![<char-nov23> 257](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/330.png) ![<char-nov23> 258](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/461.png) ![<char-nov23> 259](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/78.png) ![<char-nov23> 260](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/148.png) ![<char-nov23> 261](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/280.png) ![<char-nov23> 262](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/292.png) ![<char-nov23> 263](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/81.png) ![<char-nov23> 264](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/167.png) ![<char-nov23> 
265](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/27.png) ![<char-nov23> 266](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/278.png) ![<char-nov23> 267](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/346.png) ![<char-nov23> 268](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/366.png) ![<char-nov23> 269](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/52.png) ![<char-nov23> 270](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/49.png) ![<char-nov23> 271](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/2.png) ![<char-nov23> 272](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/367.png) ![<char-nov23> 273](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/106.png) ![<char-nov23> 274](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/20.png) ![<char-nov23> 275](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/410.png) ![<char-nov23> 276](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/128.png) ![<char-nov23> 277](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/45.png) ![<char-nov23> 278](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/229.png) ![<char-nov23> 279](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/126.png) ![<char-nov23> 280](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/100.png) ![<char-nov23> 281](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/382.png) ![<char-nov23> 282](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/263.png) ![<char-nov23> 283](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/146.png) ![<char-nov23> 284](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/530.png) ![<char-nov23> 285](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/384.png) ![<char-nov23> 286](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/374.png) ![<char-nov23> 287](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/369.png) ![<char-nov23> 288](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/216.png) ![<char-nov23> 289](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/325.png) ![<char-nov23> 290](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/471.png) ![<char-nov23> 291](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/429.png) ![<char-nov23> 292](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/320.png) ![<char-nov23> 293](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/492.png) ![<char-nov23> 294](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/509.png) ![<char-nov23> 
295](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/155.png) ![<char-nov23> 296](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/380.png) ![<char-nov23> 297](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/183.png) ![<char-nov23> 298](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/276.png) ![<char-nov23> 299](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/103.png) ![<char-nov23> 300](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/362.png) ![<char-nov23> 301](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/331.png) ![<char-nov23> 302](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/233.png) ![<char-nov23> 303](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/262.png) ![<char-nov23> 304](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/152.png) ![<char-nov23> 305](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/326.png) ![<char-nov23> 306](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/311.png) ![<char-nov23> 307](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/6.png) ![<char-nov23> 308](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/226.png) ![<char-nov23> 309](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/121.png) ![<char-nov23> 310](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/411.png) ![<char-nov23> 311](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/222.png) ![<char-nov23> 312](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/304.png) ![<char-nov23> 313](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/513.png) ![<char-nov23> 314](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/373.png) ![<char-nov23> 315](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/487.png) ![<char-nov23> 316](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/125.png) ![<char-nov23> 317](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/90.png) ![<char-nov23> 318](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/506.png) ![<char-nov23> 319](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/466.png) ![<char-nov23> 320](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/496.png) ![<char-nov23> 321](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/294.png) ![<char-nov23> 322](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/499.png) ![<char-nov23> 323](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/197.png) ![<char-nov23> 324](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/107.png) ![<char-nov23> 
325](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/223.png) ![<char-nov23> 326](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/335.png) ![<char-nov23> 327](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/453.png) ![<char-nov23> 328](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/289.png) ![<char-nov23> 329](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/417.png) ![<char-nov23> 330](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/477.png) ![<char-nov23> 331](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/310.png) ![<char-nov23> 332](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/508.png) ![<char-nov23> 333](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/340.png) ![<char-nov23> 334](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/147.png) ![<char-nov23> 335](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/220.png) ![<char-nov23> 336](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/42.png) ![<char-nov23> 337](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/392.png) ![<char-nov23> 338](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/130.png) ![<char-nov23> 339](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/501.png) ![<char-nov23> 340](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/312.png) ![<char-nov23> 341](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/194.png) ![<char-nov23> 342](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/135.png) ![<char-nov23> 343](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/277.png) ![<char-nov23> 344](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/205.png) ![<char-nov23> 345](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/525.png) ![<char-nov23> 346](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/98.png) ![<char-nov23> 347](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/239.png) ![<char-nov23> 348](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/368.png) ![<char-nov23> 349](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/39.png) ![<char-nov23> 350](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/303.png) ![<char-nov23> 351](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/383.png) ![<char-nov23> 352](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/230.png) ![<char-nov23> 353](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/142.png) ![<char-nov23> 354](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/165.png) ![<char-nov23> 
355](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/291.png) ![<char-nov23> 356](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/285.png) ![<char-nov23> 357](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/347.png) ![<char-nov23> 358](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/234.png) ![<char-nov23> 359](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/522.png) ![<char-nov23> 360](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/356.png) ![<char-nov23> 361](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/54.png) ![<char-nov23> 362](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/495.png) ![<char-nov23> 363](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/455.png) ![<char-nov23> 364](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/0.png) ![<char-nov23> 365](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/199.png) ![<char-nov23> 366](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/198.png) ![<char-nov23> 367](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/451.png) ![<char-nov23> 368](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/322.png) ![<char-nov23> 369](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/323.png) ![<char-nov23> 370](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/97.png) ![<char-nov23> 371](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/19.png) ![<char-nov23> 372](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/171.png) ![<char-nov23> 373](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/414.png) ![<char-nov23> 374](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/423.png) ![<char-nov23> 375](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/56.png) ![<char-nov23> 376](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/531.png) ![<char-nov23> 377](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/9.png) ![<char-nov23> 378](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/237.png) ![<char-nov23> 379](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/349.png) ![<char-nov23> 380](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/460.png) ![<char-nov23> 381](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/475.png) ![<char-nov23> 382](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/117.png) ![<char-nov23> 383](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/494.png) ![<char-nov23> 384](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/68.png) ![<char-nov23> 
385](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/87.png) ![<char-nov23> 386](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/406.png) ![<char-nov23> 387](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/523.png) ![<char-nov23> 388](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/114.png) ![<char-nov23> 389](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/44.png) ![<char-nov23> 390](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/10.png) ![<char-nov23> 391](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/514.png) ![<char-nov23> 392](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/185.png) ![<char-nov23> 393](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/272.png) ![<char-nov23> 394](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/286.png) ![<char-nov23> 395](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/394.png) ![<char-nov23> 396](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/375.png) ![<char-nov23> 397](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/58.png) ![<char-nov23> 398](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/425.png) ![<char-nov23> 399](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/26.png) ![<char-nov23> 400](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/436.png) ![<char-nov23> 401](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/35.png) ![<char-nov23> 402](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/91.png) ![<char-nov23> 403](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/71.png) ![<char-nov23> 404](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/172.png) ![<char-nov23> 405](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/524.png) ![<char-nov23> 406](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/497.png) ![<char-nov23> 407](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/300.png) ![<char-nov23> 408](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/144.png) ![<char-nov23> 409](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/469.png) ![<char-nov23> 410](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/517.png) ![<char-nov23> 411](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/428.png) ![<char-nov23> 412](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/290.png) ![<char-nov23> 413](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/327.png) ![<char-nov23> 414](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/175.png) ![<char-nov23> 
415](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/245.png) ![<char-nov23> 416](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/264.png) ![<char-nov23> 417](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/298.png) ![<char-nov23> 418](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/472.png) ![<char-nov23> 419](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/533.png) ![<char-nov23> 420](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/371.png) ![<char-nov23> 421](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/211.png) ![<char-nov23> 422](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/360.png) ![<char-nov23> 423](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/385.png) ![<char-nov23> 424](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/448.png) ![<char-nov23> 425](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/308.png) ![<char-nov23> 426](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/353.png) ![<char-nov23> 427](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/307.png) ![<char-nov23> 428](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/503.png) ![<char-nov23> 429](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/17.png) ![<char-nov23> 430](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/102.png) ![<char-nov23> 431](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/393.png) ![<char-nov23> 432](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/288.png) ![<char-nov23> 433](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/345.png) ![<char-nov23> 434](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/174.png) ![<char-nov23> 435](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/490.png) ![<char-nov23> 436](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/4.png) ![<char-nov23> 437](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/462.png) ![<char-nov23> 438](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/468.png) ![<char-nov23> 439](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/270.png) ![<char-nov23> 440](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/251.png) ![<char-nov23> 441](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/224.png) ![<char-nov23> 442](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/274.png) ![<char-nov23> 443](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/519.png) ![<char-nov23> 444](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/241.png) ![<char-nov23> 
445](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/173.png) ![<char-nov23> 446](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/32.png) ![<char-nov23> 447](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/29.png) ![<char-nov23> 448](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/449.png) ![<char-nov23> 449](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/435.png) ![<char-nov23> 450](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/313.png) ![<char-nov23> 451](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/77.png) ![<char-nov23> 452](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/293.png) ![<char-nov23> 453](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/441.png) ![<char-nov23> 454](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/24.png) ![<char-nov23> 455](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/236.png) ![<char-nov23> 456](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/182.png) ![<char-nov23> 457](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/18.png) ![<char-nov23> 458](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/116.png) ![<char-nov23> 459](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/82.png) ![<char-nov23> 460](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/40.png) ![<char-nov23> 461](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/213.png) ![<char-nov23> 462](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/217.png) ![<char-nov23> 463](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/119.png) ![<char-nov23> 464](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/507.png) ![<char-nov23> 465](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/299.png) ![<char-nov23> 466](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/398.png) ![<char-nov23> 467](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/50.png) ![<char-nov23> 468](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/336.png) ![<char-nov23> 469](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/166.png) ![<char-nov23> 470](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/246.png) ![<char-nov23> 471](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/412.png) ![<char-nov23> 472](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/283.png) ![<char-nov23> 473](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/46.png) ![<char-nov23> 474](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/421.png) ![<char-nov23> 
475](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/138.png) ![<char-nov23> 476](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/99.png) ![<char-nov23> 477](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/88.png) ![<char-nov23> 478](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/188.png) ![<char-nov23> 479](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/510.png) ![<char-nov23> 480](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/438.png) ![<char-nov23> 481](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/511.png) ![<char-nov23> 482](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/424.png) ![<char-nov23> 483](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/202.png) ![<char-nov23> 484](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/22.png) ![<char-nov23> 485](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/192.png) ![<char-nov23> 486](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/400.png) ![<char-nov23> 487](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/203.png) ![<char-nov23> 488](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/232.png) ![<char-nov23> 489](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/426.png) ![<char-nov23> 490](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/309.png) ![<char-nov23> 491](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/31.png) ![<char-nov23> 492](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/434.png) ![<char-nov23> 493](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/458.png) ![<char-nov23> 494](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/431.png) ![<char-nov23> 495](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/333.png) ![<char-nov23> 496](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/403.png) ![<char-nov23> 497](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/15.png) ![<char-nov23> 498](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/401.png) ![<char-nov23> 499](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/512.png) ![<char-nov23> 500](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/109.png) ![<char-nov23> 501](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/66.png) ![<char-nov23> 502](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/208.png) ![<char-nov23> 503](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/51.png) ![<char-nov23> 504](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/113.png) ![<char-nov23> 
505](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/364.png) ![<char-nov23> 506](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/442.png) ![<char-nov23> 507](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/485.png) ![<char-nov23> 508](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/219.png) ![<char-nov23> 509](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/80.png) ![<char-nov23> 510](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/395.png) ![<char-nov23> 511](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/481.png) ![<char-nov23> 512](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/252.png) ![<char-nov23> 513](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/301.png) ![<char-nov23> 514](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/480.png) ![<char-nov23> 515](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/260.png) ![<char-nov23> 516](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/287.png) ![<char-nov23> 517](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/227.png) ![<char-nov23> 518](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/302.png) ![<char-nov23> 519](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/338.png) ![<char-nov23> 520](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/521.png) ![<char-nov23> 521](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/502.png) ![<char-nov23> 522](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/387.png) ![<char-nov23> 523](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/306.png) ![<char-nov23> 524](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/478.png) ![<char-nov23> 525](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/528.png) ![<char-nov23> 526](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/381.png) ![<char-nov23> 527](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/158.png) ![<char-nov23> 528](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/526.png) ![<char-nov23> 529](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/93.png) ![<char-nov23> 530](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/207.png) ![<char-nov23> 531](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/427.png) ![<char-nov23> 532](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/527.png) ![<char-nov23> 533](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/474.png) ![<char-nov23> 534](https://huggingface.co/sd-concepts-library/manga-char-nov-23/resolve/main/concept_images/479.png)
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Mr-Men-and-Little-Misses Dreambooth model trained by fffiloni with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Use "Mr what you want" or "Little Miss what you want" prompt to try it ;) For example: mr tiger | little miss sunshine For better results, keep the steps value above 30 steps if you use it in a stable diffusion web UI Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Some Mr and Little Misses results :) : ![image](https://huggingface.co/fffiloni/mr-men-and-little-misses/resolve/main/macron.png) ![image](https://huggingface.co/fffiloni/mr-men-and-little-misses/resolve/main/mr-boris.png) ![image](https://huggingface.co/fffiloni/mr-men-and-little-misses/resolve/main/pikachu.png) ![image](https://huggingface.co/fffiloni/mr-men-and-little-misses/resolve/main/pompier.png) ![image](https://huggingface.co/fffiloni/mr-men-and-little-misses/resolve/main/rihanna.png) ![image](https://huggingface.co/fffiloni/mr-men-and-little-misses/resolve/main/tiger.png) ![image](https://huggingface.co/fffiloni/mr-men-and-little-misses/resolve/main/trump2.png)
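As a reference, a minimal `diffusers` sketch is shown below — this is an illustrative example rather than an official snippet from the repo: it assumes the checkpoint is hosted in diffusers format at `fffiloni/mr-men-and-little-misses` (the repo the sample images above are served from) and that a CUDA GPU is available.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tune from the Hub (assumes diffusers-format weights)
pipe = StableDiffusionPipeline.from_pretrained(
    "fffiloni/mr-men-and-little-misses",
    torch_dtype=torch.float16,
).to("cuda")

# Prompt pattern suggested above: "mr <something>" or "little miss <something>"
prompt = "mr tiger"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("mr_tiger.png")
```

Keeping `num_inference_steps` above 30 matches the advice above for web UI use.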
AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-11-24T02:39:25Z
--- tags: - generated_from_trainer model-index: - name: exp20-M04-both results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # exp20-M04-both This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3358 - Wer: 1.1374 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 37.2423 | 0.34 | 500 | 3.2577 | 1.0 | | 3.1334 | 0.68 | 1000 | 3.0084 | 1.0 | | 2.9616 | 1.02 | 1500 | 2.9946 | 1.0 | | 2.8067 | 1.36 | 2000 | 2.6228 | 1.3130 | | 2.732 | 1.7 | 2500 | 2.5059 | 1.5013 | | 2.3673 | 2.04 | 3000 | 2.1828 | 1.4911 | | 2.1378 | 2.38 | 3500 | 2.2066 | 1.4911 | | 1.9853 | 2.72 | 4000 | 1.9877 | 1.4580 | | 1.8574 | 3.06 | 4500 | 1.8850 | 1.4656 | | 1.7085 | 3.4 | 5000 | 1.9121 | 1.4606 | | 1.6161 | 3.74 | 5500 | 2.1036 | 1.4326 | | 1.5304 | 4.08 | 6000 | 1.9807 | 1.4478 | | 1.3531 | 4.42 | 6500 | 2.0211 | 1.4656 | | 1.3269 | 4.77 | 7000 | 1.9231 | 1.3893 | | 1.2312 | 5.11 | 7500 | 2.2652 | 1.4097 | | 1.1161 | 5.45 | 8000 | 1.9543 | 1.4529 | | 1.0305 | 5.79 | 8500 | 2.1463 | 1.4071 | | 0.9403 | 6.13 | 9000 | 3.7872 | 1.4071 | | 0.8723 | 6.47 | 9500 | 2.8466 | 1.4326 | | 0.8752 | 6.81 | 10000 | 2.2215 | 1.3766 | | 0.7774 | 7.15 | 10500 | 2.0462 | 1.3257 | | 0.74 | 7.49 | 11000 | 2.1928 | 1.3333 | | 0.7371 | 7.83 | 11500 | 2.8058 | 1.3410 | | 0.7075 | 8.17 | 12000 | 2.3100 | 1.3308 | | 0.6746 | 8.51 | 12500 | 2.6284 | 1.2875 | | 0.6233 | 8.85 | 13000 | 2.2268 | 1.3003 | | 0.7172 | 9.19 | 13500 | 2.1980 | 1.2926 | | 0.5697 | 9.53 | 14000 | 2.1950 | 1.2468 | | 0.5691 | 9.87 | 14500 | 2.1819 | 1.2316 | | 0.5062 | 10.21 | 15000 | 2.1426 | 1.2621 | | 0.4818 | 10.55 | 15500 | 2.2259 | 1.2545 | | 0.5083 | 10.89 | 16000 | 2.1764 | 1.2214 | | 0.3901 | 11.23 | 16500 | 2.2412 | 1.2341 | | 0.4275 | 11.57 | 17000 | 2.3781 | 1.2290 | | 0.4225 | 11.91 | 17500 | 2.1578 | 1.2443 | | 0.4106 | 12.25 | 18000 | 2.5651 | 1.2341 | | 0.3933 | 12.59 | 18500 | 2.1819 | 1.2265 | | 0.3821 | 12.93 | 19000 | 2.0564 | 1.1934 | | 0.3584 | 13.27 | 19500 | 2.5475 | 1.2290 | | 0.3468 | 13.61 | 20000 | 2.5857 | 1.1781 | | 0.3984 | 13.96 | 20500 | 2.2383 | 1.2239 | | 0.308 | 14.3 | 21000 | 2.4947 | 1.2137 | | 0.3356 | 14.64 | 21500 | 2.6563 | 1.2163 | | 0.3406 | 14.98 | 22000 | 2.3337 | 1.2061 | | 0.3297 | 15.32 | 22500 | 2.2793 | 1.1908 | | 0.3028 | 15.66 | 23000 | 2.6462 | 1.1654 | | 0.3226 | 16.0 | 23500 | 2.3785 | 1.1705 | | 0.2605 | 16.34 | 24000 | 2.7212 | 1.1858 | | 0.2669 | 16.68 | 24500 | 3.0365 | 1.2087 | | 0.2967 | 17.02 | 25000 | 2.4898 | 1.1934 | | 0.2547 | 17.36 | 25500 | 2.4020 | 1.1832 | | 0.2779 | 17.7 | 26000 | 2.5558 | 1.1705 | | 0.2341 | 18.04 | 26500 | 2.9406 | 1.1934 | | 0.2304 | 18.38 | 27000 | 3.1528 | 1.1603 | | 0.226 | 18.72 | 27500 | 3.0001 | 1.2163 | | 0.2319 | 
19.06 | 28000 | 3.0117 | 1.1603 | | 0.1836 | 19.4 | 28500 | 2.8332 | 1.1858 | | 0.2085 | 19.74 | 29000 | 2.8757 | 1.1603 | | 0.2383 | 20.08 | 29500 | 3.2235 | 1.1934 | | 0.2006 | 20.42 | 30000 | 3.0189 | 1.1603 | | 0.1722 | 20.76 | 30500 | 2.8001 | 1.1527 | | 0.1955 | 21.1 | 31000 | 3.0401 | 1.1578 | | 0.1839 | 21.44 | 31500 | 3.2621 | 1.1578 | | 0.1592 | 21.78 | 32000 | 3.1740 | 1.1552 | | 0.1835 | 22.12 | 32500 | 3.3974 | 1.1934 | | 0.197 | 22.46 | 33000 | 2.8283 | 1.1425 | | 0.1788 | 22.8 | 33500 | 3.1983 | 1.1705 | | 0.169 | 23.14 | 34000 | 3.1978 | 1.1425 | | 0.1649 | 23.49 | 34500 | 3.1829 | 1.1552 | | 0.1431 | 23.83 | 35000 | 3.0528 | 1.1272 | | 0.1384 | 24.17 | 35500 | 3.3792 | 1.1196 | | 0.1234 | 24.51 | 36000 | 3.3988 | 1.1425 | | 0.1552 | 24.85 | 36500 | 3.1008 | 1.1170 | | 0.124 | 25.19 | 37000 | 2.9486 | 1.1374 | | 0.1439 | 25.53 | 37500 | 3.1028 | 1.1323 | | 0.1612 | 25.87 | 38000 | 3.0209 | 1.1043 | | 0.1456 | 26.21 | 38500 | 2.9466 | 1.1323 | | 0.1333 | 26.55 | 39000 | 3.1298 | 1.1221 | | 0.1368 | 26.89 | 39500 | 3.1051 | 1.1272 | | 0.1263 | 27.23 | 40000 | 3.2888 | 1.1298 | | 0.1198 | 27.57 | 40500 | 3.0984 | 1.1298 | | 0.1202 | 27.91 | 41000 | 3.1653 | 1.1374 | | 0.1252 | 28.25 | 41500 | 3.3016 | 1.1552 | | 0.1177 | 28.59 | 42000 | 3.2566 | 1.1349 | | 0.1072 | 28.93 | 42500 | 3.3303 | 1.1425 | | 0.1497 | 29.27 | 43000 | 3.2549 | 1.1399 | | 0.1089 | 29.61 | 43500 | 3.3121 | 1.1374 | | 0.0936 | 29.95 | 44000 | 3.3358 | 1.1374 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.13.2
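As a hedged usage sketch (the card does not state where the fine-tuned weights are published, so the repository id below is a placeholder), transcription with 🤗 Transformers would typically look like this; like other wav2vec 2.0 models, it expects 16 kHz mono audio:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

repo_id = "<namespace>/exp20-M04-both"  # placeholder: replace with the actual Hub repo id
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)

# Load and resample the recording to the 16 kHz mono input the model expects
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```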
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9729629629629629 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0775 - Accuracy: 0.9730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2658 | 1.0 | 190 | 0.1305 | 0.9615 | | 0.1591 | 2.0 | 380 | 0.0781 | 0.9726 | | 0.1364 | 3.0 | 570 | 0.0775 | 0.9730 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
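For completeness, a minimal inference sketch is given below. The card does not state the namespace where these fine-tuned weights live, so the model id is a placeholder; everything else is the standard 🤗 Transformers image-classification pipeline:

```python
from transformers import pipeline

# Placeholder model id — substitute the actual Hub location of this fine-tune
classifier = pipeline(
    "image-classification",
    model="<namespace>/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

# Returns the top predicted classes with scores for a single image file
print(classifier("satellite_tile.jpg"))
```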
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/chrisway613/ddpm-butterflies-128/tensorboard?#scalars)
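Until the TODO above is filled in, here is a plausible minimal sketch for sampling with 🤗 Diffusers. It assumes the repository hosts a standard `DDPMPipeline` (UNet plus scheduler), which is what the training script normally pushes; the TensorBoard link above points at `chrisway613/ddpm-butterflies-128`, so that id is used here:

```python
from diffusers import DDPMPipeline

# Assumes a standard unconditional DDPM pipeline was pushed to this repo
pipeline = DDPMPipeline.from_pretrained("chrisway613/ddpm-butterflies-128")

# Generate one 128x128 butterfly sample (DDPM sampling uses many denoising steps)
image = pipeline(batch_size=1, num_inference_steps=1000).images[0]
image.save("butterfly_sample.png")
```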
Anubhav23/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-11-24T09:09:04Z
--- license: cc-by-4.0 --- ## Aina Project's Catalan-English machine translation model ## Table of Contents - [Model Description](#model-description) - [Intended Uses and Limitations](#intended-use) - [How to Use](#how-to-use) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Data Preparation](#data-preparation) - [Tokenization](#tokenization) - [Hyperparameters](#hyperparameters) - [Evaluation](#evaluation) - [Variable and Metrics](#variable-and-metrics) - [Evaluation Results](#evaluation-results) - [Additional Information](#additional-information) - [Author](#author) - [Contact Information](#contact-information) - [Copyright](#copyright) - [Licensing Information](#licensing-information) - [Funding](#funding) - [Disclaimer](#disclaimer) ## Model description This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Catalan-English datasets, up to 11 million sentences. Additionally, the model is evaluated on several public datasets comprising 5 different domains (general, administrative, technology, biomedical, and news). ## Intended uses and limitations You can use this model for machine translation from Catalan to English. ## How to use ### Usage Required libraries: ```bash pip install ctranslate2 pyonmttok ``` Translate a sentence using Python: ```python import ctranslate2 import pyonmttok from huggingface_hub import snapshot_download model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-ca-en", revision="main") tokenizer=pyonmttok.Tokenizer(mode="none", sp_model_path = model_dir + "/spm.model") tokenized=tokenizer.tokenize("Benvingut al projecte Aina!") translator = ctranslate2.Translator(model_dir) translated = translator.translate_batch([tokenized[0]]) print(tokenizer.detokenize(translated[0][0]['tokens'])) ``` ## Training ### Training data The model was trained on a combination of the following datasets: | Dataset | Sentences | |--------------------|----------------| | Global Voices | 21.342 | | Memories Lluires | 1.173.055 | | Wikimatrix | 1.205.908 | | TED Talks | 50.979 | | Tatoeba | 5.500 | | CoVost 2 ca-en | 79.633 | | CoVost 2 en-ca | 263.891 | | Europarl | 1.965.734 | | jw300 | 97.081 | | Crawled Generalitat| 38.595 | | Opus Books | 4.580 | | CC Aligned | 5.787.682 | | COVID_Wikipedia | 1.531 | | EuroBooks | 3.746 | | Gnome | 2.183 | | KDE 4 | 144.153 | | OpenSubtitles | 427.913 | | QED | 69.823 | | Ubuntu | 6.781 | | Wikimedia | 208.073 | |--------------------|----------------| | **Total** | **11.558.183** | ### Training procedure ### Data preparation All datasets are concatenated and filtered using the [mBERT Gencata parallel filter](https://huggingface.co/projecte-aina/mbert-base-gencata). Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py). #### Tokenization All data is tokenized using sentencepiece, using a 50 thousand token sentencepiece model learned from the combination of all filtered training data. This model is included. 
#### Hyperparameters The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf) The following hyperparameters were set on the Fairseq toolkit: | Hyperparameter | Value | |------------------------------------|-----------------------------------| | Architecture | transformer_vaswani_wmt_en_de_big | | Embedding size | 1024 | | Feedforward size | 4096 | | Number of heads | 16 | | Encoder layers | 24 | | Decoder layers | 6 | | Normalize before attention | True | | --share-decoder-input-output-embed | True | | --share-all-embeddings | True | | Effective batch size | 96.000 | | Optimizer | adam | | Adam betas | (0.9, 0.980) | | Clip norm | 0.0 | | Learning rate | 1e-3 | | Lr. scheduler | inverse sqrt | | Warmup updates | 4000 | | Dropout | 0.1 | | Label smoothing | 0.1 | The model was trained for a total of 35.000 updates. Weights were saved every 1000 updates and reported results are the average of the last 16 checkpoints. ## Evaluation ### Variable and metrics We use the BLEU score for evaluation on test sets: [Flores-101](https://github.com/facebookresearch/flores), [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/), [United Nations](https://zenodo.org/record/3888414#.Y33-_tLMIW0), [Cybersecurity](https://elrc-share.eu/repository/browse/cyber-mt-test-set/2bd93faab98c11ec9c1a00155d026706b96a490ed3e140f0a29a80a08c46e91e/), [wmt19 biomedical test set](), [wmt13 news test set](https://elrc-share.eu/repository/browse/catalan-wmt2013-machine-translation-shared-task-test-set/84a96139b98611ec9c1a00155d0267061a0aa1b62e2248e89aab4952f3c230fc/), [aina aapp]() ### Evaluation results Below are the evaluation results on the machine translation from Catalan to English compared to [Softcatalà](https://www.softcatala.org/) and [Google Translate](https://translate.google.es/?hl=es): | Test set | SoftCatalà | Google Translate | mt-aina-ca-en | |----------------------|------------|------------------|---------------| | Spanish Constitution | 35,8 | **43,2** | 40,3 | | United Nations | 44,4 | **47,4** | 44,8 | | aina_aapp | 48,8 | **53,0** | 51,5 | | european_comission | 52,0 | **53,7** | 53,1 | | Flores 101 dev | 42,7 | **47,5** | 46,1 | | Flores 101 devtest | 42,5 | **46,9** | 45,2 | | Cybersecurity | 52,5 | **58,0** | 54,2 | | wmt 19 biomedical | 18,3 | **23,4** | 21,6 | | wmt 13 news | 37,8 | **39,8** | 39,3 | | Average | 39,2 | **45,0** | 41,6 | ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to [email protected] ### Copyright Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center ### Licensing Information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ## Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. </details>
gaurishhs/API
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: other tags: - generated_from_trainer metrics: - accuracy model-index: - name: 6.7b-dalio-book-handwritten-io-constant-3e-7-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6.7b-dalio-book-handwritten-io-constant-3e-7-v2 This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5293 - Accuracy: 0.2725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-07 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5856 | 0.08 | 6 | 2.5957 | 0.2697 | | 2.6027 | 0.16 | 12 | 2.5938 | 0.2698 | | 2.619 | 0.24 | 18 | 2.5879 | 0.2700 | | 2.6121 | 0.32 | 24 | 2.5840 | 0.2702 | | 2.6024 | 0.4 | 30 | 2.5762 | 0.2706 | | 2.5878 | 0.48 | 36 | 2.5703 | 0.2707 | | 2.5541 | 0.56 | 42 | 2.5625 | 0.2710 | | 2.5207 | 0.64 | 48 | 2.5566 | 0.2713 | | 2.4577 | 0.72 | 54 | 2.5488 | 0.2715 | | 2.5614 | 0.8 | 60 | 2.5430 | 0.2718 | | 2.6959 | 0.88 | 66 | 2.5352 | 0.2722 | | 2.5084 | 0.96 | 72 | 2.5293 | 0.2725 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.12.1
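As a rough illustration only (the exact training script is not included in this card), the listed hyperparameters would correspond to 🤗 Transformers `TrainingArguments` along these lines; the multi-GPU launch configuration actually used is not specified and is omitted here:

```python
from transformers import TrainingArguments

# Per-device batch size of 1 across 8 GPUs gives the reported total batch size of 8
training_args = TrainingArguments(
    output_dir="6.7b-dalio-book-handwritten-io-constant-3e-7-v2",
    learning_rate=3e-7,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    num_train_epochs=1.0,
    lr_scheduler_type="constant",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```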
Apisate/DialoGPT-small-jordan
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1507.30 +/- 342.32 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
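One possible way to complete the TODO above — hedged, since the card does not give the repo id or checkpoint filename, so both are placeholders here. The sketch uses the classic gym API (`reset()` returns an observation, `step()` returns a 4-tuple), as expected by stable-baselines3 of this era:

```python
import gym
import pybullet_envs  # noqa: F401 — registers AntBulletEnv-v0 with gym
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholders: replace with the actual Hub repo id and zip filename of this model
checkpoint = load_from_hub(
    repo_id="<namespace>/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```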
Apisate/Discord-Ai-Bot
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: other tags: - generated_from_trainer datasets: - AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 metrics: - accuracy model-index: - name: 6.7b-dalio-book-handwritten-io-constant-6e-6-v2 results: - task: name: Causal Language Modeling type: text-generation dataset: name: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 type: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 metrics: - name: Accuracy type: accuracy value: 0.3035631370641431 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6.7b-dalio-book-handwritten-io-constant-6e-6-v2 This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 dataset. It achieves the following results on the evaluation set: - Loss: 2.1504 - Accuracy: 0.3036 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5793 | 0.08 | 6 | 2.5723 | 0.2713 | | 2.5612 | 0.16 | 12 | 2.5 | 0.2750 | | 2.5235 | 0.24 | 18 | 2.4473 | 0.2784 | | 2.4961 | 0.32 | 24 | 2.4102 | 0.2818 | | 2.4488 | 0.4 | 30 | 2.3672 | 0.2849 | | 2.4121 | 0.48 | 36 | 2.3320 | 0.2878 | | 2.3901 | 0.56 | 42 | 2.3027 | 0.2903 | | 2.2845 | 0.64 | 48 | 2.2715 | 0.2927 | | 2.3032 | 0.72 | 54 | 2.2422 | 0.2955 | | 2.2954 | 0.8 | 60 | 2.2090 | 0.2985 | | 2.3908 | 0.88 | 66 | 2.1836 | 0.3009 | | 2.2676 | 0.96 | 72 | 2.1504 | 0.3036 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Aplinxy9plin/toxic-detection-rus
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: other tags: - generated_from_trainer datasets: - AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 metrics: - accuracy model-index: - name: 6.7b-dalio-book-handwritten-io-constant-1e-6-v2 results: - task: name: Causal Language Modeling type: text-generation dataset: name: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 type: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 metrics: - name: Accuracy type: accuracy value: 0.27929113140380746 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6.7b-dalio-book-handwritten-io-constant-1e-6-v2 This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 dataset. It achieves the following results on the evaluation set: - Loss: 2.4238 - Accuracy: 0.2793 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5852 | 0.08 | 6 | 2.5957 | 0.2697 | | 2.5956 | 0.16 | 12 | 2.5762 | 0.2706 | | 2.5961 | 0.24 | 18 | 2.5547 | 0.2711 | | 2.5731 | 0.32 | 24 | 2.5312 | 0.2722 | | 2.5415 | 0.4 | 30 | 2.5117 | 0.2734 | | 2.5168 | 0.48 | 36 | 2.4961 | 0.2746 | | 2.4972 | 0.56 | 42 | 2.4824 | 0.2756 | | 2.4354 | 0.64 | 48 | 2.4727 | 0.2761 | | 2.4055 | 0.72 | 54 | 2.4609 | 0.2768 | | 2.4681 | 0.8 | 60 | 2.4492 | 0.2778 | | 2.5866 | 0.88 | 66 | 2.4355 | 0.2784 | | 2.4221 | 0.96 | 72 | 2.4238 | 0.2793 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
ArBert/albert-base-v2-finetuned-ner-agglo
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- title: Hotel Reservation Cancellation Predict emoji: 📉 colorFrom: yellow colorTo: green sdk: streamlit sdk_version: 1.10.0 app_file: app.py pinned: false ---
ArBert/roberta-base-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-11-24T09:56:42Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7782142857142857 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33689839572192515 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3353115727002967 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.44191217342968314 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.516 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.36403508771929827 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3333333333333333 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8916679222540305 - name: F1 (macro) type: f1_macro value: 0.8835691803582005 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7997652582159624 - name: F1 (macro) type: f1_macro value: 0.5506574094908683 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6153846153846154 - name: F1 (macro) type: f1_macro value: 0.6018830205550925 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9635528969882451 - name: F1 (macro) type: f1_macro value: 0.8828743289803266 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8806016922594797 - name: F1 (macro) type: f1_macro value: 0.8769283661697559 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1-child/raw/main/analogy.json)): - Accuracy on SAT (full): 0.33689839572192515 - Accuracy on SAT: 0.3353115727002967 - Accuracy on BATS: 0.44191217342968314 - Accuracy on U2: 0.36403508771929827 - Accuracy on U4: 0.3333333333333333 - Accuracy on Google: 0.516 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1-child/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8916679222540305 - Micro F1 score on CogALexV: 0.7997652582159624 - Micro F1 score on EVALution: 0.6153846153846154 - Micro F1 score on K&H+N: 0.9635528969882451 - Micro F1 score on ROOT09: 0.8806016922594797 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1-child/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.7782142857142857 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1-child") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 5 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 1 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1-child/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
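As a follow-up to the usage snippet above (not part of the original card), the relation embeddings can be compared directly; the sketch below scores two word pairs with cosine similarity, and the pairs themselves are arbitrary examples.

```python
# Follow-up sketch (not from the original card): compare two relation embeddings
# from the usage snippet above with cosine similarity. The word pairs are arbitrary.
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1-child")
a = np.array(model.get_embedding(['Tokyo', 'Japan']))
b = np.array(model.get_embedding(['Paris', 'France']))
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)  # higher values indicate more similar relations between the pairs
```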
Aracatto/Catto
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8001190476190476 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4385026737967914 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4362017804154303 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5786548082267927 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.802 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.41228070175438597 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4074074074074074 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9026668675606448 - name: F1 (macro) type: f1_macro value: 0.894134460103921 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8044600938967136 - name: F1 (macro) type: f1_macro value: 0.5658272058955415 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6343445287107259 - name: F1 (macro) type: f1_macro value: 0.6053012278166984 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9589622313417264 - name: F1 (macro) type: f1_macro value: 0.8820483966915638 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8824819805703541 - name: F1 (macro) type: f1_macro value: 0.8778053649821325 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1-child/raw/main/analogy.json)): - Accuracy on SAT (full): 0.4385026737967914 - Accuracy on SAT: 0.4362017804154303 - Accuracy on BATS: 0.5786548082267927 - Accuracy on U2: 0.41228070175438597 - Accuracy on U4: 0.4074074074074074 - Accuracy on Google: 0.802 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1-child/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9026668675606448 - Micro F1 score on CogALexV: 0.8044600938967136 - Micro F1 score on EVALution: 0.6343445287107259 - Micro F1 score on K&H+N: 0.9589622313417264 - Micro F1 score on ROOT09: 0.8824819805703541 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1-child/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8001190476190476 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1-child") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: nce_logout - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 5 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 1 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1-child/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Arina/Erine
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2-child-prototypical results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7209126984126984 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3770053475935829 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3798219584569733 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.47971095052807117 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.59 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40789473684210525 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4097222222222222 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8556576766611421 - name: F1 (macro) type: f1_macro value: 0.8493669074367355 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7894366197183098 - name: F1 (macro) type: f1_macro value: 0.5861443079105217 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6224268689057422 - name: F1 (macro) type: f1_macro value: 0.5864971893719026 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9202893510468109 - name: F1 (macro) type: f1_macro value: 0.683390204727169 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8492635537449076 - name: F1 (macro) type: f1_macro value: 0.8314731763097831 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2-child-prototypical RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2-child-prototypical/raw/main/analogy.json)): - Accuracy on SAT (full): 0.3770053475935829 - Accuracy on SAT: 0.3798219584569733 - Accuracy on BATS: 0.47971095052807117 - Accuracy on U2: 0.40789473684210525 - Accuracy on U4: 0.4097222222222222 - Accuracy on Google: 0.59 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2-child-prototypical/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8556576766611421 - Micro F1 score on CogALexV: 0.7894366197183098 - Micro F1 score on EVALution: 0.6224268689057422 - Micro F1 score on K&H+N: 0.9202893510468109 - Micro F1 score on ROOT09: 0.8492635537449076 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2-child-prototypical/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.7209126984126984 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2-child-prototypical") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 10 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child_prototypical The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2-child-prototypical/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Ayham/xlmroberta_large_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2022-11-24T14:01:22Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Jak's Voxel-ish Image Pack for Stable Diffusion

Another fantastic image pack crafted by Jak_TheAI_Artist, trained on 143 images over 8,000 training steps with 20% training text.

Include the prompt trigger "voxel-ish" to activate the style.

Tip: add "intricate detail" to the prompt to make a semi-realistic image.

### UPDATE: Version 1.2 available [here](https://huggingface.co/plasmo/vox2)

Sample pictures of this concept:

voxel-ish
![voxel-ish 0](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/wizard.jpg)
![voxel-ish 1](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/lion.jpg)
![voxel-ish 2](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/ww2.jpg)
![voxel-ish 3](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/ww.jpg)
![voxel-ish 4](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/scarlett.jpg)
![voxel-ish 5](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/owl.jpg)
![voxel-ish 6](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/turtle.jpg)
![voxel-ish 7](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/cycle.jpg)
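The card does not include code; the minimal sketch below assumes the repository at "plasmo/voxel-ish" (the path used by the sample images above) hosts diffusers-format weights, and the sampler settings are only illustrative.

```python
# Minimal sketch (not from the original card): load the pack with diffusers,
# assuming "plasmo/voxel-ish" ships diffusers-format weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("plasmo/voxel-ish", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The trigger token goes at the start of the prompt, as described above.
prompt = "voxel-ish portrait of a wizard, intricate detail"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("wizard_voxel.png")
```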
Ayham/xlnet_gpt_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-11-24T14:12:01Z
--- tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [EmnaBou/TD-tokenizer](https://huggingface.co/EmnaBou/TD-tokenizer) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
Ayham/xlnet_roberta_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2022-11-24T14:19:08Z
---
license: other
---
# Play with Stable Diffusion in one click

This notebook supports the AnythingV3, NovelAI, and Waifu Diffusion models for you to choose from!

Setup takes about **10 minutes** to run; please be patient.

[Click here to open the Google Colab](https://colab.research.google.com/drive/195NNaRHYbei4mFW7YYtAZzrjN3zJvBf4?usp=sharing)
Ayta/Haha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- rolosaCBTech/autotrain-data-mt5_xlsum_msamsum
co2_eq_emissions:
  emissions: 52.2418341683463
---
# Model Trained Using AutoTrain

- Problem type: Summarization
- Model ID: 2231571360
- CO2 Emissions (in grams): 52.2418

## Validation Metrics

- Loss: 1.589
- Rouge1: 43.587
- Rouge2: 22.929
- RougeL: 38.320
- RougeLsum: 38.089
- Gen Len: 23.965

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/rolosaCBTech/autotrain-mt5_xlsum_msamsum-2231571360
```
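Beyond the hosted inference API, the model can also be run locally; the sketch below is not from the original card and assumes the checkpoint ID in the cURL URL above ("rolosaCBTech/autotrain-mt5_xlsum_msamsum-2231571360") is a standard seq2seq summarization model loadable with transformers.

```python
# Sketch of local usage (assumption: the repo ID from the cURL URL above loads as a
# standard summarization checkpoint).
from transformers import pipeline

summarizer = pipeline("summarization", model="rolosaCBTech/autotrain-mt5_xlsum_msamsum-2231571360")
print(summarizer("I love AutoTrain", max_length=64)[0]["summary_text"])
```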
AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad
[ "pytorch", "electra", "question-answering", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "ElectraForQuestionAnswering" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - generated_from_trainer model-index: - name: testc8-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # testc8-1 This model is a fine-tuned version of [shafin/chemical-bert-uncased-finetuned-cust-c2](https://huggingface.co/shafin/chemical-bert-uncased-finetuned-cust-c2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0415 | 1.0 | 16 | 0.1392 | | 0.0443 | 2.0 | 32 | 0.1289 | | 0.0471 | 3.0 | 48 | 0.1363 | | 0.042 | 4.0 | 64 | 0.1598 | | 0.0452 | 5.0 | 80 | 0.1571 | | 0.0446 | 6.0 | 96 | 0.1733 | | 0.0466 | 7.0 | 112 | 0.1301 | | 0.0391 | 8.0 | 128 | 0.1359 | | 0.0425 | 9.0 | 144 | 0.1324 | | 0.0436 | 10.0 | 160 | 0.0939 | | 0.0406 | 11.0 | 176 | 0.1495 | | 0.0387 | 12.0 | 192 | 0.1592 | | 0.0335 | 13.0 | 208 | 0.1118 | | 0.0413 | 14.0 | 224 | 0.1508 | | 0.0363 | 15.0 | 240 | 0.1471 | | 0.0428 | 16.0 | 256 | 0.1721 | | 0.0384 | 17.0 | 272 | 0.1853 | | 0.0381 | 18.0 | 288 | 0.1578 | | 0.0373 | 19.0 | 304 | 0.1707 | | 0.0351 | 20.0 | 320 | 0.1241 | | 0.0346 | 21.0 | 336 | 0.1602 | | 0.0386 | 22.0 | 352 | 0.1207 | | 0.0274 | 23.0 | 368 | 0.1642 | | 0.0338 | 24.0 | 384 | 0.1169 | | 0.0327 | 25.0 | 400 | 0.1461 | | 0.026 | 26.0 | 416 | 0.1323 | | 0.0315 | 27.0 | 432 | 0.1403 | | 0.042 | 28.0 | 448 | 0.1056 | | 0.0346 | 29.0 | 464 | 0.1186 | | 0.0294 | 30.0 | 480 | 0.1490 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
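As a sketch only, the hyperparameters listed above map onto a transformers `TrainingArguments` object as shown below; the output directory name is illustrative, and since the card does not identify the dataset or task, no `Trainer` or data setup is included.

```python
# Sketch: mapping the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="testc8-1",            # illustrative name taken from the card title
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                        # "mixed_precision_training: Native AMP"
)
```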
AyushPJ/test-squad-trained-finetuned-squad
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
license: openrail
---
## Model by ShadoWxShinigamI

Use the token "mdjrny-pntrt illustration style" at the beginning of your prompt. If some object doesn't work, provide more context in the prompt (e.g. 'ocean, ship, waves' instead of just 'ship').

Training: 2080 steps, batch size 4, 512x512, v1-5 base, 26 images.

Examples:

![tmp5533s1nv.png](https://s3.amazonaws.com/moonup/production/uploads/1669303345997-633a520aecbd8b19357b4806.png)
![tmppgdfjq3w.png](https://s3.amazonaws.com/moonup/production/uploads/1669303363601-633a520aecbd8b19357b4806.png)
![tmpq14p0iqo.png](https://s3.amazonaws.com/moonup/production/uploads/1669303380519-633a520aecbd8b19357b4806.png)
![tmpqvml7g3_.png](https://s3.amazonaws.com/moonup/production/uploads/1669303393613-633a520aecbd8b19357b4806.png)
![tmpvjxs8d39.png](https://s3.amazonaws.com/moonup/production/uploads/1669303406989-633a520aecbd8b19357b4806.png)
Azaghast/DistilBERT-SCP-Class-Classification
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
null
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="tomohiroliu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
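The snippet above relies on `load_from_hub` and `evaluate_agent` helpers defined in the Deep RL course notebook rather than a published library; a minimal sketch of what `load_from_hub` could look like is shown below, assuming the pickle stores the dictionary that the snippet indexes into.

```python
# Hypothetical helper (not part of the original card): one way `load_from_hub` could
# be implemented, assuming the pickle holds a dict with keys like 'env_id',
# 'max_steps', 'n_eval_episodes', 'qtable', and 'eval_seed'.
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download `filename` from `repo_id` on the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```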
Azaghast/GPT2-SCP-ContainmentProcedures
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="aspectcisco/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
Azaghast/GPT2-SCP-Descriptions
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
---
tags:
- Taxi-v3-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3-4x4-no_slippery
      type: Taxi-v3-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="aspectcisco/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
Azaghast/GPT2-SCP-Miscellaneous
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - generated_from_trainer model-index: - name: testc8-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # testc8-2 This model is a fine-tuned version of [shafin/chemical-bert-uncased-finetuned-cust-c2](https://huggingface.co/shafin/chemical-bert-uncased-finetuned-cust-c2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6173 | 1.0 | 16 | 0.3874 | | 0.5383 | 2.0 | 32 | 0.3227 | | 0.4756 | 3.0 | 48 | 0.3142 | | 0.4399 | 4.0 | 64 | 0.3404 | | 0.4462 | 5.0 | 80 | 0.3112 | | 0.4187 | 6.0 | 96 | 0.3185 | | 0.4023 | 7.0 | 112 | 0.2628 | | 0.3712 | 8.0 | 128 | 0.2807 | | 0.3922 | 9.0 | 144 | 0.2516 | | 0.3483 | 10.0 | 160 | 0.1995 | | 0.3417 | 11.0 | 176 | 0.2452 | | 0.3585 | 12.0 | 192 | 0.2236 | | 0.3413 | 13.0 | 208 | 0.2031 | | 0.3452 | 14.0 | 224 | 0.2238 | | 0.317 | 15.0 | 240 | 0.2229 | | 0.3161 | 16.0 | 256 | 0.2591 | | 0.3338 | 17.0 | 272 | 0.2599 | | 0.2949 | 18.0 | 288 | 0.2618 | | 0.3035 | 19.0 | 304 | 0.2436 | | 0.3108 | 20.0 | 320 | 0.2015 | | 0.289 | 21.0 | 336 | 0.2329 | | 0.3144 | 22.0 | 352 | 0.1940 | | 0.2606 | 23.0 | 368 | 0.2334 | | 0.2842 | 24.0 | 384 | 0.1996 | | 0.2892 | 25.0 | 400 | 0.2330 | | 0.2612 | 26.0 | 416 | 0.2163 | | 0.2669 | 27.0 | 432 | 0.2053 | | 0.3147 | 28.0 | 448 | 0.1555 | | 0.286 | 29.0 | 464 | 0.1983 | | 0.2857 | 30.0 | 480 | 0.2346 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
Azura/data
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="email81227/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
Azuris/DialoGPT-medium-senorita
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.52 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="email81227/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
Azuris/DialoGPT-small-envy
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2022-11-24T15:49:44Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="popolin52/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
BE/demo-sentiment2021
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v5
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="tomohiroliu/q-Taxi-v5", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
BJTK2/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-11-24T15:58:38Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3705 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 3705, "warmup_steps": 371, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
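The training section above lists a DataLoader of batch size 4, `CosineSimilarityLoss`, and the `fit()` parameters; as a sketch only, they would be wired together with sentence-transformers roughly as below. The base checkpoint and the training pair are placeholders, since the card does not name the starting model or describe the data.

```python
# Sketch: wiring the listed training parameters into sentence-transformers.
# The base model and the training pair are placeholders / assumptions.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # assumed MPNet base
train_examples = [InputExample(texts=["a sentence", "another sentence"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=4)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=371,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```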
BOON/electra-xlnet
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-300m-nyanja-test_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-nyanja-test_v1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 0.4496 - Cer: 0.0940 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 2.9918 | 0.62 | 400 | inf | 1.0 | 1.0 | | 2.6572 | 1.24 | 800 | inf | 0.9958 | 0.4380 | | 1.2544 | 1.86 | 1200 | inf | 0.5640 | 0.1152 | | 0.7816 | 2.48 | 1600 | inf | 0.4496 | 0.0940 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.13.2
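The WER and CER figures reported above can be reproduced from transcription pairs with the `evaluate` library; the snippet below is illustrative only and uses toy strings rather than the actual evaluation set.

```python
# Illustrative only: computing WER and CER with the `evaluate` library on toy strings.
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")
references = ["mwana wanga ali bwino"]
predictions = ["mwana wanga ndi bwino"]
print(wer.compute(predictions=predictions, references=references))
print(cer.compute(predictions=predictions, references=references))
```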
BSC-LT/roberta-base-biomedical-es
[ "pytorch", "roberta", "fill-mask", "es", "arxiv:2109.03570", "arxiv:2109.07765", "transformers", "biomedical", "spanish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
161
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: convnext-tiny-224_album_vitVMMRdb_make_model_album_pred results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224_album_vitVMMRdb_make_model_album_pred This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7021 - Accuracy: 0.8173 - Precision: 0.8094 - Recall: 0.8173 - F1: 0.8057 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:| | 4.6105 | 1.0 | 839 | 4.5248 | 0.1097 | 0.0579 | 0.1097 | 0.0403 | | 3.4711 | 2.0 | 1678 | 3.3162 | 0.3000 | 0.2302 | 0.3000 | 0.2097 | | 2.6202 | 3.0 | 2517 | 2.4445 | 0.4709 | 0.4120 | 0.4709 | 0.3939 | | 2.0614 | 4.0 | 3356 | 1.8839 | 0.5742 | 0.5389 | 0.5742 | 0.5168 | | 1.7026 | 5.0 | 4195 | 1.5247 | 0.6436 | 0.6180 | 0.6436 | 0.6013 | | 1.4288 | 6.0 | 5034 | 1.2768 | 0.6979 | 0.6810 | 0.6979 | 0.6686 | | 1.1953 | 7.0 | 5873 | 1.0960 | 0.7323 | 0.7218 | 0.7323 | 0.7077 | | 1.058 | 8.0 | 6712 | 0.9828 | 0.7548 | 0.7441 | 0.7548 | 0.7350 | | 0.9691 | 9.0 | 7551 | 0.9018 | 0.7718 | 0.7616 | 0.7718 | 0.7536 | | 0.8757 | 10.0 | 8390 | 0.8380 | 0.7893 | 0.7806 | 0.7893 | 0.7756 | | 0.8446 | 11.0 | 9229 | 0.7905 | 0.7982 | 0.7913 | 0.7982 | 0.7859 | | 0.7711 | 12.0 | 10068 | 0.7524 | 0.8069 | 0.7995 | 0.8069 | 0.7950 | | 0.7689 | 13.0 | 10907 | 0.7283 | 0.8123 | 0.8043 | 0.8123 | 0.8009 | | 0.6919 | 14.0 | 11746 | 0.7133 | 0.8148 | 0.8061 | 0.8148 | 0.8036 | | 0.694 | 15.0 | 12585 | 0.7064 | 0.8177 | 0.8089 | 0.8177 | 0.8067 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
BSC-LT/roberta-base-bne-capitel-ner
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "ner", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
This model is a fine-tuned version of google/mt5-small on the GermanQuAD dataset. It is intended for question generation from a text corpus.

The following hyperparameters were used during training:
- learning_rate: 1e-3
- mini_batch_size: 8
- optimizer: Adam
- num_epochs: 4
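As an illustrative sketch only (not from the original card): one way such a fine-tuned mT5 checkpoint could be used for question generation with transformers. The checkpoint path is hypothetical and the expected input format is an assumption, since the card does not specify either.

```python
# Illustrative sketch: the checkpoint directory and input format are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_dir = "./mt5-small-germanquad-qg"  # hypothetical local output directory
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)

context = "Die GermanQuAD-Daten wurden aus deutschen Wikipedia-Artikeln erstellt."
inputs = tokenizer(context, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```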
BSC-LT/roberta-large-bne-capitel-pos
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "pos", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5327637463001902 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8221 - Matthews Correlation: 0.5328 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5238 | 1.0 | 535 | 0.5287 | 0.3943 | | 0.3462 | 2.0 | 1070 | 0.4960 | 0.4998 | | 0.2323 | 3.0 | 1605 | 0.5847 | 0.5016 | | 0.1788 | 4.0 | 2140 | 0.7807 | 0.5282 | | 0.1282 | 5.0 | 2675 | 0.8221 | 0.5328 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
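The headline metric above is Matthews correlation on CoLA; as an aside (not from the original card), it can be computed from label/prediction arrays with scikit-learn, as in the toy snippet below.

```python
# Illustrative only: Matthews correlation from toy label/prediction arrays.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))  # ranges from -1 to 1; 0 is chance level
```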
BSen/wav2vec2-base-timit-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 --- This repo contains the \[emb\]-cross-encoder model -- a cross-encoder that scores a query-item pair using the dot-product of the contextualized query and item embeddings obtained after jointly encoding the pair. This model is used in the experiments for our EMNLP 2022 paper titled "[Efficient Nearest Neighbor Search for Cross-Encoder Models using Matrix Factorization](https://arxiv.org/pdf/2210.12579.pdf)". See the [paper](https://arxiv.org/pdf/2210.12579.pdf) and/or [code](https://github.com/iesl/anncur) for more details about the model.
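The scoring scheme described here — jointly encode the query-item pair, then take the dot-product of the pooled contextualized query and item representations — can be sketched as follows; this is an illustration with a placeholder backbone, not the authors' implementation (see the linked code for that).

```python
# Illustrative sketch of [emb]-style cross-encoder scoring (not the authors' code).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder backbone
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

query = "capital of france"
item = "Paris is the capital and largest city of France."

# Jointly encode the pair in a single forward pass.
enc = tokenizer(query, item, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)

# token_type_ids mark which tokens belong to the query (0) vs. the item (1).
segment = enc["token_type_ids"][0]
query_emb = hidden[segment == 0].mean(dim=0)
item_emb = hidden[segment == 1].mean(dim=0)

score = torch.dot(query_emb, item_emb)  # dot-product of contextualized embeddings
print(float(score))
```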
BatuhanYilmaz/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: sklearn tags: - sklearn - skops - tabular-classification widget: structuredData: sepal_length: - - 6.3 - - 6.5 - - 5.6 --- ### Linear Regression Model This linear regression model was trained on the Iris dataset after it was transformed to a structured array. The goal is to test this PR -> https://github.com/skops-dev/skops/pull/211
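A minimal sketch of the setup this card describes — Iris converted to a NumPy structured array and a linear model fitted on it; the field names and the conversion back to a plain 2-D array are illustrative assumptions.

```python
# Hedged sketch: fit a linear model on Iris stored as a structured array.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression

iris = load_iris()
fields = [
    ("sepal_length", float),
    ("sepal_width", float),
    ("petal_length", float),
    ("petal_width", float),
]
X_struct = np.array([tuple(row) for row in iris.data], dtype=fields)

# scikit-learn estimators expect a plain 2-D array, so unpack the structured columns.
X = np.stack([X_struct[name] for name in X_struct.dtype.names], axis=1)
reg = LinearRegression().fit(X, iris.target)
print(reg.score(X, iris.target))
```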
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:squad", "arxiv:1910.01108", "transformers", "question-answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
2022-11-24T18:57:18Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2-parent results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8311904761904761 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.47058823529411764 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.47774480712166173 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5630906058921623 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.746 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4605263157894737 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.48148148148148145 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9193912912460449 - name: F1 (macro) type: f1_macro value: 0.9155232475170662 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8359154929577466 - name: F1 (macro) type: f1_macro value: 0.6451009241404899 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6587215601300108 - name: F1 (macro) type: f1_macro value: 0.6374469353477457 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9703693399179245 - name: F1 (macro) type: f1_macro value: 0.9060392695729131 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8846756502663742 - name: F1 (macro) type: f1_macro value: 0.8795670909805673 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2-parent RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2-parent/raw/main/analogy.json)): - Accuracy on SAT (full): 0.47058823529411764 - Accuracy on SAT: 0.47774480712166173 - Accuracy on BATS: 0.5630906058921623 - Accuracy on U2: 0.4605263157894737 - Accuracy on U4: 0.48148148148148145 - Accuracy on Google: 0.746 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2-parent/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9193912912460449 - Micro F1 score on CogALexV: 0.8359154929577466 - Micro F1 score on EVALution: 0.6587215601300108 - Micro F1 score on K&H+N: 0.9703693399179245 - Micro F1 score on ROOT09: 0.8846756502663742 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2-parent/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8311904761904761 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2-parent") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: nce_logout - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 9 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: parent The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2-parent/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
BatuhanYilmaz/dummy
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-11-24T19:05:15Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: textClass-finetuned-coba-coba results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # textClass-finetuned-coba-coba This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4974 - Accuracy: 0.7831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5094 | 1.0 | 2757 | 0.4658 | 0.7746 | | 0.4474 | 2.0 | 5514 | 0.4490 | 0.7851 | | 0.402 | 3.0 | 8271 | 0.4619 | 0.7841 | | 0.3618 | 4.0 | 11028 | 0.4822 | 0.7831 | | 0.334 | 5.0 | 13785 | 0.4974 | 0.7831 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
BatuhanYilmaz/mt5-small-finetuned-amazonbooks-en-es
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-11-24T19:23:00Z
--- license: other tags: - computer_vision - pose_estimation --- Download via: https://github.com/DeepLabCut/DLClibrary & https://github.com/DeepLabCut/DeepLabCut (note that the download count shown on Hugging Face is therefore not accurate). Non-commercial use permitted. Copyright the authors of Mathis, Biasi et al., WACV. This is a pre-trained cat DeepLabCut network from Mathis et al. 2019 arXiv / WACV 2021. If you use this model, please cite our paper: https://arxiv.org/pdf/1909.11229.pdf
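As a complement to the DLClibrary route mentioned above, the checkpoint files can also be pulled directly with huggingface_hub; the repo id below is a placeholder assumption, not taken from the card.

```python
# Hedged download sketch using huggingface_hub (the card's recommended route is DLClibrary).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="DeepLabCut/DeepLabCut-cat",  # placeholder repo id; use the actual model repo
)
print("Model files downloaded to:", local_dir)
```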
Baybars/wav2vec2-xls-r-300m-cv8-turkish
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "tr", "dataset:common_voice", "transformers", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-11-24T19:51:50Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8158333333333333 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3850267379679144 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3857566765578635 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4452473596442468 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.462 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34649122807017546 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3263888888888889 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8225101702576465 - name: F1 (macro) type: f1_macro value: 0.8034809428019279 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7518779342723004 - name: F1 (macro) type: f1_macro value: 0.33329299540004015 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5411700975081257 - name: F1 (macro) type: f1_macro value: 0.4639671585144264 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8478820338039924 - name: F1 (macro) type: f1_macro value: 0.6768848309180341 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8307740520213099 - name: F1 (macro) type: f1_macro value: 0.8235960440777698 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0-child/raw/main/analogy.json)): - Accuracy on SAT (full): 0.3850267379679144 - Accuracy on SAT: 0.3857566765578635 - Accuracy on BATS: 0.4452473596442468 - Accuracy on U2: 0.34649122807017546 - Accuracy on U4: 0.3263888888888889 - Accuracy on Google: 0.462 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0-child/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8225101702576465 - Micro F1 score on CogALexV: 0.7518779342723004 - Micro F1 score on EVALution: 0.5411700975081257 - Micro F1 score on K&H+N: 0.8478820338039924 - Micro F1 score on ROOT09: 0.8307740520213099 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0-child/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8158333333333333 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0-child") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 9 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0-child/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
BeIR/query-gen-msmarco-t5-base-v1
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
1,816
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8430952380952381 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3582887700534759 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3649851632047478 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4280155642023346 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.532 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3333333333333333 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3101851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8464667771583547 - name: F1 (macro) type: f1_macro value: 0.8314311734193423 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8084507042253521 - name: F1 (macro) type: f1_macro value: 0.5269777075808457 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6397616468039004 - name: F1 (macro) type: f1_macro value: 0.6161756853613614 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9094386867914029 - name: F1 (macro) type: f1_macro value: 0.7684752097244069 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8693199623942337 - name: F1 (macro) type: f1_macro value: 0.866286368957231 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-child/raw/main/analogy.json)): - Accuracy on SAT (full): 0.3582887700534759 - Accuracy on SAT: 0.3649851632047478 - Accuracy on BATS: 0.4280155642023346 - Accuracy on U2: 0.3333333333333333 - Accuracy on U4: 0.3101851851851852 - Accuracy on Google: 0.532 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-child/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8464667771583547 - Micro F1 score on CogALexV: 0.8084507042253521 - Micro F1 score on EVALution: 0.6397616468039004 - Micro F1 score on K&H+N: 0.9094386867914029 - Micro F1 score on ROOT09: 0.8693199623942337 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-child/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8430952380952381 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-child") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 9 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 1 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-child/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
BeIR/query-gen-msmarco-t5-large-v1
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
1,225
2022-11-24T19:55:10Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6884126984126984 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35561497326203206 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.36795252225519287 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3952195664257921 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.468 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3333333333333333 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.31712962962962965 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8853397619406358 - name: F1 (macro) type: f1_macro value: 0.8737515262680274 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8082159624413146 - name: F1 (macro) type: f1_macro value: 0.5667219243609135 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6110509209100758 - name: F1 (macro) type: f1_macro value: 0.584000376416088 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9426166794185157 - name: F1 (macro) type: f1_macro value: 0.8355305513810347 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8727671576308367 - name: F1 (macro) type: f1_macro value: 0.8723842336449569 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2-child/raw/main/analogy.json)): - Accuracy on SAT (full): 0.35561497326203206 - Accuracy on SAT: 0.36795252225519287 - Accuracy on BATS: 0.3952195664257921 - Accuracy on U2: 0.3333333333333333 - Accuracy on U4: 0.31712962962962965 - Accuracy on Google: 0.468 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2-child/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8853397619406358 - Micro F1 score on CogALexV: 0.8082159624413146 - Micro F1 score on EVALution: 0.6110509209100758 - Micro F1 score on K&H+N: 0.9426166794185157 - Micro F1 score on ROOT09: 0.8727671576308367 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2-child/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.6884126984126984 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2-child") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 5 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2-child/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
BeIR/sparta-msmarco-distilbert-base-v1
[ "pytorch", "distilbert", "feature-extraction", "arxiv:2009.13013", "arxiv:2104.08663", "transformers" ]
feature-extraction
{ "architectures": [ "DistilBertModel" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
106
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8059920634920635 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32620320855614976 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3323442136498516 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3507504168982768 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.382 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33771929824561403 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35185185185185186 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7999095977098086 - name: F1 (macro) type: f1_macro value: 0.7684883952780684 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7530516431924884 - name: F1 (macro) type: f1_macro value: 0.34938339909910743 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.4739978331527627 - name: F1 (macro) type: f1_macro value: 0.3568206512908552 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8818946929122905 - name: F1 (macro) type: f1_macro value: 0.7088102566206993 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7790661234722657 - name: F1 (macro) type: f1_macro value: 0.759592502713887 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-child/raw/main/analogy.json)): - Accuracy on SAT (full): 0.32620320855614976 - Accuracy on SAT: 0.3323442136498516 - Accuracy on BATS: 0.3507504168982768 - Accuracy on U2: 0.33771929824561403 - Accuracy on U4: 0.35185185185185186 - Accuracy on Google: 0.382 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-child/raw/main/classification.json)): - Micro F1 score on BLESS: 0.7999095977098086 - Micro F1 score on CogALexV: 0.7530516431924884 - Micro F1 score on EVALution: 0.4739978331527627 - Micro F1 score on K&H+N: 0.8818946929122905 - Micro F1 score on ROOT09: 0.7790661234722657 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-child/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8059920634920635 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-child") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 9 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-child/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
BearThreat/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
2022-11-24T19:58:24Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7791666666666667 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34759358288770054 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35311572700296734 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4102279043913285 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.562 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.39035087719298245 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3773148148148148 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.841795992165135 - name: F1 (macro) type: f1_macro value: 0.8175016337972725 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7873239436619718 - name: F1 (macro) type: f1_macro value: 0.4950289837305936 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5877573131094258 - name: F1 (macro) type: f1_macro value: 0.5255505321109658 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9225846838700703 - name: F1 (macro) type: f1_macro value: 0.8111729625447839 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8558445628329677 - name: F1 (macro) type: f1_macro value: 0.8564733796055889 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1-child/raw/main/analogy.json)): - Accuracy on SAT (full): 0.34759358288770054 - Accuracy on SAT: 0.35311572700296734 - Accuracy on BATS: 0.4102279043913285 - Accuracy on U2: 0.39035087719298245 - Accuracy on U4: 0.3773148148148148 - Accuracy on Google: 0.562 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1-child/raw/main/classification.json)): - Micro F1 score on BLESS: 0.841795992165135 - Micro F1 score on CogALexV: 0.7873239436619718 - Micro F1 score on EVALution: 0.5877573131094258 - Micro F1 score on K&H+N: 0.9225846838700703 - Micro F1 score on ROOT09: 0.8558445628329677 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1-child/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.7791666666666667 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1-child") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 8 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 1 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1-child/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Bee-Garbs/DialoGPT-real-cartman-small
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7053968253968254 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3449197860962567 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34421364985163205 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5130628126737076 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.572 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.39035087719298245 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3819444444444444 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.832906433629652 - name: F1 (macro) type: f1_macro value: 0.8065752966322005 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7967136150234742 - name: F1 (macro) type: f1_macro value: 0.5088521823294603 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5509209100758397 - name: F1 (macro) type: f1_macro value: 0.4913066970113474 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9061000208666621 - name: F1 (macro) type: f1_macro value: 0.7609290356933636 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8398621121905359 - name: F1 (macro) type: f1_macro value: 0.8229245515993938 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0-child/raw/main/analogy.json)): - Accuracy on SAT (full): 0.3449197860962567 - Accuracy on SAT: 0.34421364985163205 - Accuracy on BATS: 0.5130628126737076 - Accuracy on U2: 0.39035087719298245 - Accuracy on U4: 0.3819444444444444 - Accuracy on Google: 0.572 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0-child/raw/main/classification.json)): - Micro F1 score on BLESS: 0.832906433629652 - Micro F1 score on CogALexV: 0.7967136150234742 - Micro F1 score on EVALution: 0.5509209100758397 - Micro F1 score on K&H+N: 0.9061000208666621 - Micro F1 score on ROOT09: 0.8398621121905359 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0-child/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.7053968253968254 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0-child") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 2 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0-child/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Beelow/wav2vec2-ukrainian-model-large
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7567460317460317 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3449197860962567 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35014836795252224 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4252362423568649 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.59 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35964912280701755 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33796296296296297 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8695193611571493 - name: F1 (macro) type: f1_macro value: 0.8568674733966278 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7643192488262912 - name: F1 (macro) type: f1_macro value: 0.48540339382722264 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5525460455037919 - name: F1 (macro) type: f1_macro value: 0.4851738190720077 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9296793489601447 - name: F1 (macro) type: f1_macro value: 0.8229079543242852 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8712002507051081 - name: F1 (macro) type: f1_macro value: 0.8695492693223117 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-child/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3449197860962567
    - Accuracy on SAT: 0.35014836795252224
    - Accuracy on BATS: 0.4252362423568649
    - Accuracy on U2: 0.35964912280701755
    - Accuracy on U4: 0.33796296296296297
    - Accuracy on Google: 0.59
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-child/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8695193611571493
    - Micro F1 score on CogALexV: 0.7643192488262912
    - Micro F1 score on EVALution: 0.5525460455037919
    - Micro F1 score on K&H+N: 0.9296793489601447
    - Micro F1 score on ROOT09: 0.8712002507051081
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-child/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7567460317460317

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-child")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair (768-dimensional for this roberta-base model)
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 6
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
- data_level: child

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-child/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
Belin/T5-Terms-and-Conditions
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-0-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6966865079365079 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40641711229946526 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4065281899109792 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.603112840466926 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.764 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40350877192982454 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40046296296296297 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.854452312791924 - name: F1 (macro) type: f1_macro value: 0.8335614299635147 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7906103286384977 - name: F1 (macro) type: f1_macro value: 0.46109731160888645 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6310942578548212 - name: F1 (macro) type: f1_macro value: 0.6269464689917221 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9136815747374278 - name: F1 (macro) type: f1_macro value: 0.7987746285783167 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8260733312441241 - name: F1 (macro) type: f1_macro value: 0.8284737550173958 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-0-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-0-child/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.40641711229946526
    - Accuracy on SAT: 0.4065281899109792
    - Accuracy on BATS: 0.603112840466926
    - Accuracy on U2: 0.40350877192982454
    - Accuracy on U4: 0.40046296296296297
    - Accuracy on Google: 0.764
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-0-child/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.854452312791924
    - Micro F1 score on CogALexV: 0.7906103286384977
    - Micro F1 score on EVALution: 0.6310942578548212
    - Micro F1 score on K&H+N: 0.9136815747374278
    - Micro F1 score on ROOT09: 0.8260733312441241
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-0-child/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.6966865079365079

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-0-child")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair (768-dimensional for this roberta-base model)
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 2
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
- data_level: child

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-0-child/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
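The analogy-question accuracies above are obtained by ranking candidate word pairs against a query pair. A rough sketch of that procedure is shown below; the query and candidate pairs are hypothetical, and only the `get_embedding` call is taken from the card's usage snippet:

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-0-child")

def embedding(pair):
    # relation embedding of a word pair, as in the usage snippet above
    return np.array(model.get_embedding(pair))

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical SAT-style question: which candidate relates like the query pair?
query = ['word', 'language']
candidates = [['note', 'music'], ['paint', 'portrait'], ['tale', 'story']]

query_vec = embedding(query)
scores = [cosine(query_vec, embedding(c)) for c in candidates]
print(candidates[int(np.argmax(scores))])  # candidate whose relation is most similar to the query
```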
Bella4322/Sarah
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-11-24T20:08:36Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7853174603174603 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4197860962566845 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.42433234421364985 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5619788771539744 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.744 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.43859649122807015 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4351851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8716287479282808 - name: F1 (macro) type: f1_macro value: 0.8587076704516358 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8446009389671362 - name: F1 (macro) type: f1_macro value: 0.623737694543065 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.676056338028169 - name: F1 (macro) type: f1_macro value: 0.6565825451079605 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9460944564234541 - name: F1 (macro) type: f1_macro value: 0.8418727845254599 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8799749294891883 - name: F1 (macro) type: f1_macro value: 0.8734406634484763 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1-child/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.4197860962566845
    - Accuracy on SAT: 0.42433234421364985
    - Accuracy on BATS: 0.5619788771539744
    - Accuracy on U2: 0.43859649122807015
    - Accuracy on U4: 0.4351851851851852
    - Accuracy on Google: 0.744
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1-child/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8716287479282808
    - Micro F1 score on CogALexV: 0.8446009389671362
    - Micro F1 score on EVALution: 0.676056338028169
    - Micro F1 score on K&H+N: 0.9460944564234541
    - Micro F1 score on ROOT09: 0.8799749294891883
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1-child/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7853174603174603

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1-child")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair (768-dimensional for this roberta-base model)
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
- data_level: child

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1-child/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
BenDavis71/GPT-2-Finetuning-AIRaid
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6858134920634921 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40641711229946526 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4065281899109792 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6520289049471929 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.814 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4298245614035088 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4375 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8256742504143438 - name: F1 (macro) type: f1_macro value: 0.7997907403301278 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8183098591549296 - name: F1 (macro) type: f1_macro value: 0.5751411013000909 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6413867822318526 - name: F1 (macro) type: f1_macro value: 0.6303562723873062 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8593586979202894 - name: F1 (macro) type: f1_macro value: 0.6700136749243296 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8583516139141335 - name: F1 (macro) type: f1_macro value: 0.8571706539074961 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-child/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.40641711229946526
    - Accuracy on SAT: 0.4065281899109792
    - Accuracy on BATS: 0.6520289049471929
    - Accuracy on U2: 0.4298245614035088
    - Accuracy on U4: 0.4375
    - Accuracy on Google: 0.814
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-child/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8256742504143438
    - Micro F1 score on CogALexV: 0.8183098591549296
    - Micro F1 score on EVALution: 0.6413867822318526
    - Micro F1 score on K&H+N: 0.8593586979202894
    - Micro F1 score on ROOT09: 0.8583516139141335
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-child/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.6858134920634921

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-child")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair (768-dimensional for this roberta-base model)
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
- data_level: child

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-child/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
BenQLange/HF_bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7030555555555555 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3850267379679144 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3916913946587537 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5269594219010562 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37280701754385964 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40046296296296297 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.835317161368088 - name: F1 (macro) type: f1_macro value: 0.8283163898192295 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7708920187793428 - name: F1 (macro) type: f1_macro value: 0.40683683267154375 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6040086673889491 - name: F1 (macro) type: f1_macro value: 0.562590771697943 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8937886902691798 - name: F1 (macro) type: f1_macro value: 0.7550347133400666 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8448762143528674 - name: F1 (macro) type: f1_macro value: 0.8407765599818559 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1-child/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3850267379679144
    - Accuracy on SAT: 0.3916913946587537
    - Accuracy on BATS: 0.5269594219010562
    - Accuracy on U2: 0.37280701754385964
    - Accuracy on U4: 0.40046296296296297
    - Accuracy on Google: 0.6
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1-child/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.835317161368088
    - Micro F1 score on CogALexV: 0.7708920187793428
    - Micro F1 score on EVALution: 0.6040086673889491
    - Micro F1 score on K&H+N: 0.8937886902691798
    - Micro F1 score on ROOT09: 0.8448762143528674
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1-child/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7030555555555555

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1-child")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair (768-dimensional for this roberta-base model)
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 8
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
- data_level: child

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1-child/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
BenWitter/DialoGPT-small-Tyrion
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-11-24T20:13:54Z
--- language: en thumbnail: http://www.huggingtweets.com/parker_gibbons/1669320881340/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1590137629219004416/6Pj98wZW_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">parker gibbons</div> <div style="text-align: center; font-size: 14px;">@parker_gibbons</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from parker gibbons. | Data | parker gibbons | | --- | --- | | Tweets downloaded | 3165 | | Retweets | 972 | | Short tweets | 234 | | Tweets kept | 1959 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vt4m93y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @parker_gibbons's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8lgj2jge) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8lgj2jge/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/parker_gibbons') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
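The pipeline call above samples stochastically, so its outputs differ between runs. As a small illustrative sketch (the prompt, seed, and sampling settings are assumptions, not part of the original card), fixing the seed with `transformers.set_seed` makes the generations reproducible:

```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='huggingtweets/parker_gibbons')

set_seed(42)  # fix the RNG so repeated runs return the same generations
outputs = generator("My dream is", num_return_sequences=5, max_length=40, do_sample=True)
for out in outputs:
    print(out['generated_text'])
```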
Benicio/t5-small-finetuned-en-to-ro
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6546230158730159 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.31016042780748665 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32047477744807124 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4558087826570317 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.552 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33771929824561403 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.36342592592592593 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7158354678318517 - name: F1 (macro) type: f1_macro value: 0.6580573706656033 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7342723004694836 - name: F1 (macro) type: f1_macro value: 0.2592182558244037 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.4431202600216685 - name: F1 (macro) type: f1_macro value: 0.2667711261353617 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.865201363288586 - name: F1 (macro) type: f1_macro value: 0.6765044508427398 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.727984957693513 - name: F1 (macro) type: f1_macro value: 0.6461380162719604 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.31016042780748665
    - Accuracy on SAT: 0.32047477744807124
    - Accuracy on BATS: 0.4558087826570317
    - Accuracy on U2: 0.33771929824561403
    - Accuracy on U4: 0.36342592592592593
    - Accuracy on Google: 0.552
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.7158354678318517
    - Micro F1 score on CogALexV: 0.7342723004694836
    - Micro F1 score on EVALution: 0.4431202600216685
    - Micro F1 score on K&H+N: 0.865201363288586
    - Micro F1 score on ROOT09: 0.727984957693513
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.6546230158730159

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair (768-dimensional for this roberta-base model)
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 8
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
- data_level: child

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
Beri/legal-qa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2022-11-24T20:18:59Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-parent results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8430952380952381 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3582887700534759 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3649851632047478 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4280155642023346 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.532 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3333333333333333 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3101851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8464667771583547 - name: F1 (macro) type: f1_macro value: 0.8314311734193423 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8084507042253521 - name: F1 (macro) type: f1_macro value: 0.5269777075808457 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6397616468039004 - name: F1 (macro) type: f1_macro value: 0.6161756853613614 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9094386867914029 - name: F1 (macro) type: f1_macro value: 0.7684752097244069 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8693199623942337 - name: F1 (macro) type: f1_macro value: 0.866286368957231 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-parent RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-parent/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3582887700534759
    - Accuracy on SAT: 0.3649851632047478
    - Accuracy on BATS: 0.4280155642023346
    - Accuracy on U2: 0.3333333333333333
    - Accuracy on U4: 0.3101851851851852
    - Accuracy on Google: 0.532
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-parent/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8464667771583547
    - Micro F1 score on CogALexV: 0.8084507042253521
    - Micro F1 score on EVALution: 0.6397616468039004
    - Micro F1 score on K&H+N: 0.9094386867914029
    - Micro F1 score on ROOT09: 0.8693199623942337
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-parent/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8430952380952381

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-parent")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair (768-dimensional for this roberta-base model)
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
- data_level: parent

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-parent/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
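The lexical relation classification scores above come from training a classifier on top of the frozen relation embeddings. A minimal sketch of that setup follows; the labelled word pairs are toy examples (the reported numbers use [relbert/lexical_relation_classification](https://huggingface.co/datasets/relbert/lexical_relation_classification)), and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1-parent")

# toy labelled word pairs, for illustration only
pairs = [['dog', 'animal'], ['car', 'vehicle'], ['hot', 'cold'], ['big', 'small']]
labels = ['hypernym', 'hypernym', 'antonym', 'antonym']

# relation embeddings serve as fixed features for a linear classifier
features = np.array([model.get_embedding(p) for p in pairs])
classifier = LogisticRegression(max_iter=1000).fit(features, labels)

test_pair = np.array(model.get_embedding(['cat', 'animal'])).reshape(1, -1)
print(classifier.predict(test_pair))  # predicted relation label for the unseen pair
```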
Berzemu/Coco
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-parent results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8059920634920635 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32620320855614976 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3323442136498516 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3507504168982768 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.382 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33771929824561403 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35185185185185186 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7999095977098086 - name: F1 (macro) type: f1_macro value: 0.7684883952780684 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7530516431924884 - name: F1 (macro) type: f1_macro value: 0.34938339909910743 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.4739978331527627 - name: F1 (macro) type: f1_macro value: 0.3568206512908552 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8818946929122905 - name: F1 (macro) type: f1_macro value: 0.7088102566206993 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7790661234722657 - name: F1 (macro) type: f1_macro value: 0.759592502713887 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-parent RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-parent/raw/main/analogy.json)): - Accuracy on SAT (full): 0.32620320855614976 - Accuracy on SAT: 0.3323442136498516 - Accuracy on BATS: 0.3507504168982768 - Accuracy on U2: 0.33771929824561403 - Accuracy on U4: 0.35185185185185186 - Accuracy on Google: 0.382 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-parent/raw/main/classification.json)): - Micro F1 score on BLESS: 0.7999095977098086 - Micro F1 score on CogALexV: 0.7530516431924884 - Micro F1 score on EVALution: 0.4739978331527627 - Micro F1 score on K&H+N: 0.8818946929122905 - Micro F1 score on ROOT09: 0.7790661234722657 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-parent/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8059920634920635 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-parent") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 9 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: parent The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0-parent/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
BhanuSama/gpt2-finetuned-xsum
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: shreya_sentence_truth_predictor2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shreya_sentence_truth_predictor2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8314 - Accuracy: 0.8915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0919 | 1.0 | 875 | 0.6681 | 0.8975 | | 0.0483 | 2.0 | 1750 | 0.8296 | 0.8885 | | 0.04 | 3.0 | 2625 | 0.8314 | 0.8915 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.2
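The card lists hyperparameters but no code, so here is a minimal, hedged `Trainer` sketch mirroring them; the binary label count, the per-epoch evaluation strategy, and the dataset handling are assumptions, since the card leaves them unspecified.

```python
# Hedged sketch reproducing the listed hyperparameters; labels and datasets are assumptions.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumption: binary truth/false labels
)

args = TrainingArguments(
    output_dir="shreya_sentence_truth_predictor2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table
)

# Pass `model` and `args` to transformers.Trainer together with the (unspecified)
# train/eval datasets, then call trainer.train().
```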
Bharathdamu/wav2vec2-large-xls-r-300m-hindi2-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-11-24T20:29:17Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-parent results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7567460317460317 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3449197860962567 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35014836795252224 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4252362423568649 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.59 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35964912280701755 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33796296296296297 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8695193611571493 - name: F1 (macro) type: f1_macro value: 0.8568674733966278 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7643192488262912 - name: F1 (macro) type: f1_macro value: 0.48540339382722264 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5525460455037919 - name: F1 (macro) type: f1_macro value: 0.4851738190720077 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9296793489601447 - name: F1 (macro) type: f1_macro value: 0.8229079543242852 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8712002507051081 - name: F1 (macro) type: f1_macro value: 0.8695492693223117 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-parent RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-parent/raw/main/analogy.json)): - Accuracy on SAT (full): 0.3449197860962567 - Accuracy on SAT: 0.35014836795252224 - Accuracy on BATS: 0.4252362423568649 - Accuracy on U2: 0.35964912280701755 - Accuracy on U4: 0.33796296296296297 - Accuracy on Google: 0.59 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-parent/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8695193611571493 - Micro F1 score on CogALexV: 0.7643192488262912 - Micro F1 score on EVALution: 0.5525460455037919 - Micro F1 score on K&H+N: 0.9296793489601447 - Micro F1 score on ROOT09: 0.8712002507051081 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-parent/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.7567460317460317 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-parent") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 6 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 1 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: parent The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1-parent/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Bharathdamu/wav2vec2-large-xls-r-300m-hindi3-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2-parent results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6386706349206349 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32887700534759357 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3323442136498516 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.44858254585881047 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.474 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34210526315789475 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32407407407407407 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.776254331776405 - name: F1 (macro) type: f1_macro value: 0.743956464512583 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.753755868544601 - name: F1 (macro) type: f1_macro value: 0.32771443150653473 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.4284940411700975 - name: F1 (macro) type: f1_macro value: 0.2369468752324584 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8608193642623635 - name: F1 (macro) type: f1_macro value: 0.6343956091240253 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7643371983704167 - name: F1 (macro) type: f1_macro value: 0.7154904519234845 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2-parent RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2-parent/raw/main/analogy.json)): - Accuracy on SAT (full): 0.32887700534759357 - Accuracy on SAT: 0.3323442136498516 - Accuracy on BATS: 0.44858254585881047 - Accuracy on U2: 0.34210526315789475 - Accuracy on U4: 0.32407407407407407 - Accuracy on Google: 0.474 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2-parent/raw/main/classification.json)): - Micro F1 score on BLESS: 0.776254331776405 - Micro F1 score on CogALexV: 0.753755868544601 - Micro F1 score on EVALution: 0.4284940411700975 - Micro F1 score on K&H+N: 0.8608193642623635 - Micro F1 score on ROOT09: 0.7643371983704167 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2-parent/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.6386706349206349 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2-parent") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 10 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: parent The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2-parent/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Bhumika/roberta-base-finetuned-sst2
[ "pytorch", "tensorboard", "roberta", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "model-index" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
85
2022-11-24T20:35:57Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-parent results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6858134920634921 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40641711229946526 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4065281899109792 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6520289049471929 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.814 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4298245614035088 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4375 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8256742504143438 - name: F1 (macro) type: f1_macro value: 0.7997907403301278 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8183098591549296 - name: F1 (macro) type: f1_macro value: 0.5751411013000909 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6413867822318526 - name: F1 (macro) type: f1_macro value: 0.6303562723873062 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8593586979202894 - name: F1 (macro) type: f1_macro value: 0.6700136749243296 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8583516139141335 - name: F1 (macro) type: f1_macro value: 0.8571706539074961 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-parent RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-parent/raw/main/analogy.json)): - Accuracy on SAT (full): 0.40641711229946526 - Accuracy on SAT: 0.4065281899109792 - Accuracy on BATS: 0.6520289049471929 - Accuracy on U2: 0.4298245614035088 - Accuracy on U4: 0.4375 - Accuracy on Google: 0.814 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-parent/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8256742504143438 - Micro F1 score on CogALexV: 0.8183098591549296 - Micro F1 score on EVALution: 0.6413867822318526 - Micro F1 score on K&H+N: 0.8593586979202894 - Micro F1 score on ROOT09: 0.8583516139141335 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-parent/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.6858134920634921 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-parent") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 10 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: parent The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2-parent/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Bia18/Beatriz
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-11-24T20:37:42Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-0-parent results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.38442460317460314 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.23529411764705882 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.2314540059347181 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3229571984435798 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.384 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3157894736842105 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.2847222222222222 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6522525237306012 - name: F1 (macro) type: f1_macro value: 0.616560269476982 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7183098591549296 - name: F1 (macro) type: f1_macro value: 0.16833503884438658 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.42632719393282775 - name: F1 (macro) type: f1_macro value: 0.28678399596569476 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8141475968560896 - name: F1 (macro) type: f1_macro value: 0.6286243048790003 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6552804763397054 - name: F1 (macro) type: f1_macro value: 0.5562839421136045 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-0-parent RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-0-parent/raw/main/analogy.json)): - Accuracy on SAT (full): 0.23529411764705882 - Accuracy on SAT: 0.2314540059347181 - Accuracy on BATS: 0.3229571984435798 - Accuracy on U2: 0.3157894736842105 - Accuracy on U4: 0.2847222222222222 - Accuracy on Google: 0.384 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-0-parent/raw/main/classification.json)): - Micro F1 score on BLESS: 0.6522525237306012 - Micro F1 score on CogALexV: 0.7183098591549296 - Micro F1 score on EVALution: 0.42632719393282775 - Micro F1 score on K&H+N: 0.8141475968560896 - Micro F1 score on ROOT09: 0.6552804763397054 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-0-parent/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.38442460317460314 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-0-parent") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 2 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: parent The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-0-parent/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Biasface/DDDC2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2022-11-24T20:41:06Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-parent results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6546230158730159 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.31016042780748665 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32047477744807124 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4558087826570317 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.552 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33771929824561403 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.36342592592592593 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7158354678318517 - name: F1 (macro) type: f1_macro value: 0.6580573706656033 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7342723004694836 - name: F1 (macro) type: f1_macro value: 0.2592182558244037 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.4431202600216685 - name: F1 (macro) type: f1_macro value: 0.2667711261353617 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.865201363288586 - name: F1 (macro) type: f1_macro value: 0.6765044508427398 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.727984957693513 - name: F1 (macro) type: f1_macro value: 0.6461380162719604 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-parent RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-parent/raw/main/analogy.json)): - Accuracy on SAT (full): 0.31016042780748665 - Accuracy on SAT: 0.32047477744807124 - Accuracy on BATS: 0.4558087826570317 - Accuracy on U2: 0.33771929824561403 - Accuracy on U4: 0.36342592592592593 - Accuracy on Google: 0.552 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-parent/raw/main/classification.json)): - Micro F1 score on BLESS: 0.7158354678318517 - Micro F1 score on CogALexV: 0.7342723004694836 - Micro F1 score on EVALution: 0.4431202600216685 - Micro F1 score on K&H+N: 0.865201363288586 - Micro F1 score on ROOT09: 0.727984957693513 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-parent/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.6546230158730159 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-parent") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 8 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: parent The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-parent/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
BigDaddyNe1L/Hhaa
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-11-24T20:49:24Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-300m-tonga-test_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-tonga-test_v2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4784 - Wer: 0.3887 - Cer: 0.0940 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 6.6795 | 3.62 | 500 | 2.9371 | 1.0 | 1.0 | | 2.9201 | 7.25 | 1000 | 2.8427 | 1.0 | 1.0 | | 2.6938 | 10.87 | 1500 | 1.6109 | 1.0 | 0.3506 | | 1.1605 | 14.49 | 2000 | 0.6472 | 0.5433 | 0.1229 | | 0.7598 | 18.12 | 2500 | 0.5543 | 0.4611 | 0.1092 | | 0.6529 | 21.74 | 3000 | 0.5202 | 0.4289 | 0.1026 | | 0.5816 | 25.36 | 3500 | 0.4931 | 0.4093 | 0.0981 | | 0.5476 | 28.98 | 4000 | 0.4916 | 0.4041 | 0.0978 | | 0.517 | 32.61 | 4500 | 0.4765 | 0.3948 | 0.0951 | | 0.5025 | 36.23 | 5000 | 0.4812 | 0.3879 | 0.0942 | | 0.4879 | 39.85 | 5500 | 0.4784 | 0.3887 | 0.0940 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.13.2
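Since the card describes training but not inference, here is a hedged transcription sketch; the repository id is a placeholder (the card does not state where the checkpoint is hosted) and the audio file name is only an example.

```python
# Hedged sketch: replace the placeholder repo id with the actual checkpoint location.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<namespace>/wav2vec2-xls-r-300m-tonga-test_v2",  # placeholder repo id
)
print(asr("example_tonga_clip.wav")["text"])
```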
BigSalmon/BertaMyWorda
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit tags: - generated_from_trainer model-index: - name: music_CLM results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # music_CLM This model is a fine-tuned version of [bernhardtandy/music_CLM](https://huggingface.co/bernhardtandy/music_CLM) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.5316 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
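As a usage note, a hedged generation sketch; the card only names the base checkpoint `bernhardtandy/music_CLM`, so the model id below may need to be swapped for the fine-tuned repository, and the prompt is purely illustrative since the expected music token format is not documented.

```python
# Hedged sketch: model id and prompt are illustrative assumptions, not documented usage.
from transformers import pipeline

generator = pipeline("text-generation", model="bernhardtandy/music_CLM")
print(generator("C4 E4 G4", max_new_tokens=64)[0]["generated_text"])
```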
BigSalmon/InformalToFormalLincoln19
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-11-24T21:02:32Z
--- license: unknown --- # Cables Retake finetuned style Model Produced from publicly available pictures in 576x704 format. Two models are available: one trained with the kohya_ss train_db script and the other with the diffusers_fine_tuning script. ## Using the model * common subject prompt tokens: `cables retake artstyle <whatever>` ## Example prompts `cables retake artstyle smiling woman`:
BigSalmon/InformalToFormalLincoln20
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - hi license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Jsun Hi - Jiping results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 args: 'config: hi, split: test' metrics: - name: Wer type: wer value: 31.761618555828324 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Jsun Hi - Jiping This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2775 - Wer: 31.7616 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2092 | 0.61 | 1000 | 0.3201 | 38.7666 | | 0.1106 | 1.22 | 2000 | 0.2810 | 34.1023 | | 0.1049 | 1.83 | 3000 | 0.2660 | 32.4812 | | 0.052 | 2.45 | 4000 | 0.2775 | 31.7616 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.0+cu102 - Datasets 2.6.1 - Tokenizers 0.13.1
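A hedged transcription sketch follows; the repository id is a placeholder, and forcing the Hindi transcription prompt is an assumption based on the card's language tag.

```python
# Hedged sketch: the repo id is a placeholder; adjust it to where the checkpoint lives.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="<namespace>/whisper-small-hi")
# Force Hindi transcription (assumption: mirrors the fine-tuning language).
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="hindi", task="transcribe"
)
print(asr("example_hindi_clip.mp3")["text"])
```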
BigSalmon/InformalToFormalLincoln22
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: other tags: - computer_vision - pose_estimation --- Non-commercial use permitted. Copyright the authors of Mathis, Biasi et al., WACV. Download via: https://github.com/DeepLabCut/DLClibrary & https://github.com/DeepLabCut/DeepLabCut (note that, as a result, the download count on Hugging Face is not accurate). A pre-trained dog DeepLabCut network from Mathis et al. 2019 arXiv / WACV 2021. If you use this model, please cite our paper: https://arxiv.org/pdf/1909.11229.pdf
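Since the card points to DLClibrary for downloads, a hedged sketch follows; the model-zoo key and the helper call are assumptions and should be confirmed against the dlclibrary documentation.

```python
# Hedged sketch of fetching the model through dlclibrary; "full_dog" is an assumed
# model-zoo key -- confirm it (and the helper signature) against the dlclibrary docs.
from dlclibrary import download_huggingface_model

download_huggingface_model("full_dog", "pretrained_dog_model")
```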
BigSalmon/InformalToFormalLincoln23
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-11-24T21:10:32Z
--- language: - en - zh - multilingual tags: - Image-to-Text - OCR - Image-Captioning - Text-Recognition datasets: - priyank-m/text_recognition_en_zh_clean metrics: - cer --- Multilingual OCR (m_OCR) is a VisionEncoderDecoder model, based on the TrOCR approach, for English and Chinese document text recognition. It uses a pre-trained vision encoder and a pre-trained language model as the decoder. Encoder model used: facebook/vit-mae-large Decoder model used: xlm-roberta-base Notes and observations: 1. TrOCR started from open-source pre-trained models, but the paper also mentions that it was further pre-trained on the text-recognition task with 684 million samples, followed by a second training stage on additional data. 2. TrOCR was pre-trained on 32 V100 GPUs with 32 GB of memory each and fine-tuned on 8 V100 GPUs; the batch size was 2,048 and the learning rate was 5e-5. 3. The diagram in the paper is a bit misleading: the image is first resized and then divided into 16x16 patches, but the resizing step is not shown. 4. The first idea was to use DiT, since it was trained on 41 million document images and could in theory have boosted performance, but its performance here was extremely poor, so the model was discarded. 5. Several other models were tried, but not all encoder/decoder pairs are compatible; the VisionEncoderDecoder model throws an error for incompatible combinations. 6. Another idea was to use BLOOM, which had been released recently at the time of writing, but it requires a value indicating which language is being processed, so it is not suitable for building a multilingual OCR model. 7. The models that worked best together for me were ViT and RoBERTa. 8. The TrOCR paper does not discuss using a large vision model with a base language model; the m_OCR model uses this configuration. 9. A large amount of data covering a wide variety of variations is required for good performance: training m_OCR on a 200K-sample dataset gave very poor results, while training on approximately 1.4 million samples raised performance to a good level. 10. Using large datasets, for example close to 1 million samples, starts posing additional difficulties: downloading, uploading, and even cleaning the data become quite slow when using only free resources on the internet. 11. Using the set_transform function to transform samples on the fly was a good idea, as it avoided saving the transformed dataset. 12. Streaming the dataset might be another good option if the dataset size increases further. 13. The free GPU on Colab is not enough for this experiment: keeping two models in GPU memory while training forces a small batch size, and the free GPUs (T4) are not fast enough. 14. A very important data-cleaning step was simply to check that each sample's image and text can be converted to the input format expected by the model, and that the text is non-empty when converted back from input IDs (some characters are not recognized by the tokenizer and are converted to the special token, and special tokens are usually skipped when converting input IDs back to text), since a non-empty value is required for the CER calculation. 15. Resuming training took one and sometimes two hours just skipping already-processed batches; one possible way to avoid this waste is to shuffle the training dataset before starting training and skip the batch-skipping step. This would be particularly useful as the dataset size increases further.
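To make the encoder/decoder pairing above concrete, here is a minimal, hedged sketch of the TrOCR-style composition; it is not the author's training code, the image file is a placeholder, and disabling MAE's patch masking is an assumption.

```python
# Hedged sketch of composing a ViT-MAE encoder with an XLM-RoBERTa decoder
# in the TrOCR-style VisionEncoderDecoder setup; not the author's actual script.
from PIL import Image
from transformers import AutoFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/vit-mae-large", "xlm-roberta-base"
)
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/vit-mae-large")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

# Assumption: disable MAE's random patch masking so all patches reach the decoder.
model.encoder.config.mask_ratio = 0.0

# Generation-related config that must be set before training or decoding.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

# Greedy decoding on one text-line image (meaningful only after fine-tuning).
image = Image.open("line.png").convert("RGB")  # placeholder file
pixel_values = feature_extractor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=64)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```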
BigSalmon/InformalToFormalLincoln24
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-11-24T21:25:03Z
--- license: creativeml-openrail-m tags: - text-to-image --- ### estelle-sims-style on Stable Diffusion via Dreambooth #### model by estelleflores This is the Stable Diffusion model fine-tuned on the estelle-sims-style concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **3D render from a videogame in sks style** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/estelle-sims-style/resolve/main/concept_images/1.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/estelle-sims-style/resolve/main/concept_images/0.jpeg) ![image 2](https://huggingface.co/sd-dreambooth-library/estelle-sims-style/resolve/main/concept_images/5.jpeg) ![image 3](https://huggingface.co/sd-dreambooth-library/estelle-sims-style/resolve/main/concept_images/4.jpeg) ![image 4](https://huggingface.co/sd-dreambooth-library/estelle-sims-style/resolve/main/concept_images/3.jpeg) ![image 5](https://huggingface.co/sd-dreambooth-library/estelle-sims-style/resolve/main/concept_images/2.jpeg)
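Beyond the linked notebooks, a hedged `diffusers` sketch; the repository id is inferred from the concept_images links above and the prompt is only an example.

```python
# Hedged sketch: repo id inferred from the concept_images links above; verify before use.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/estelle-sims-style", torch_dtype=torch.float16
).to("cuda")

prompt = "3D render from a videogame in sks style, small cozy kitchen interior"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("estelle_sims_style_sample.png")
```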
BigSalmon/Lincoln4
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-11-24T21:39:40Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: reverent_franklin results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # reverent_franklin This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 
'name': 'Unlikelihood', 'score_threshold': 0.00078}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'reverent_franklin', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/3csuo5ov
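For readers who want to restate the hyperparameters above in code, here is a minimal sketch of the matching `transformers.TrainingArguments`; it only mirrors the values reported in the card and is not the authors' actual training script (their run lives in the `apo` project linked in the W&B URL).

```python
from transformers import TrainingArguments

# Values copied from the hyperparameter list and full config above; the actual
# objective (Unlikelihood with a score threshold) is implemented outside of
# TrainingArguments and is not reproduced here.
training_args = TrainingArguments(
    output_dir="training_output104340",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # effective batch size of 64
    learning_rate=5e-4,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    weight_decay=0.1,
    max_steps=50354,
    seed=42,
    fp16=True,                          # Native AMP mixed precision
    logging_first_step=True,
    logging_steps=1,
    save_strategy="steps",
    save_steps=25354,
    evaluation_strategy="no",
)
```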
BigSalmon/MrLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-11-24T21:39:40Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: suspicious_noyce results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # suspicious_noyce This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 
'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'suspicious_noyce', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/21rsvjy5
BigSalmon/MrLincoln10
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-11-24T21:39:41Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: gifted_tesla results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # gifted_tesla This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 3147 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'value_head_config': 
{'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 0.5, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'gifted_tesla', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/188ouvdg
BigSalmon/MrLincoln11
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: competent_joliot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # competent_joliot This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 3147 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'value_head_config': 
{'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'competent_joliot', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/uqnj0736
BigSalmon/MrLincoln12
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2022-11-24T21:39:47Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: practical_bartik results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # practical_bartik This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'filter_threshold': 0.00078, 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 
'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'practical_bartik', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/2a1mfkas
BigSalmon/MrLincolnBerta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: zh datasets: c2m inference: parameters: max_length: 108 num_return_sequences: 1 do_sample: True widget: - text: "晋太元中,武陵人捕鱼为业。缘溪行,忘路之远近。忽逢桃花林,夹岸数百步,中无杂树,芳草鲜美,落英缤纷。渔人甚异之,复前行,欲穷其林。林尽水源,便得一山,山有小口,仿佛若有光。便舍船,从口入。初极狭,才通人。复行数十步,豁然开朗。土地平旷,屋舍俨然,有良田、美池、桑竹之属。阡陌交通,鸡犬相闻。其中往来种作,男女衣着,悉如外人。黄发垂髫,并怡然自乐。" example_title: "桃花源记" - text: "往者不可谏,来者犹可追。" example_title: "来者犹可追" - text: "逝者如斯夫!不舍昼夜。" example_title: "逝者如斯夫" --- # 文言文 to 现代文 ## Model description ## How to use 使用 pipeline 调用模型: ```python >>> from transformers import pipeline >>> model_checkpoint = "supermy/c2m" >>> translator = pipeline("translation", model=model_checkpoint, num_return_sequences=1, max_length=52, truncation=True,) >>> translator("往者不可谏,来者犹可追。") [{'translation_text': '过 去 的 事 情 不能 劝 谏 , 未来 的 事 情 还 可以 追 回 来 。 如 果 过 去 的 事 情 不能 劝 谏 , 那 么 , 未来 的 事 情 还 可以 追 回 来 。 如 果 过 去 的 事 情'}] >>> translator("福兮祸所伏,祸兮福所倚。",do_sample=True) [{'translation_text': '幸 福 是 祸 患 所 隐 藏 的 , 灾 祸 是 福 祸 所 依 托 的 。 这 些 都 是 幸 福 所 依 托 的 。 这 些 都 是 幸 福 所 带 来 的 。 幸 福 啊 , 也 是 幸 福'}] >>> translator("成事不说,遂事不谏,既往不咎。", num_return_sequences=1,do_sample=True) [{'translation_text': '事 情 不 高 兴 , 事 情 不 劝 谏 , 过 去 的 事 就 不 会 责 怪 。 事 情 没 有 多 久 了 , 事 情 没 有 多 久 , 事 情 没 有 多 久 了 , 事 情 没 有 多'}] >>> translator("逝者如斯夫!不舍昼夜。",num_return_sequences=1,max_length=30) [{'translation_text': '逝 去 的 人 就 像 这 样 啊 , 不分 昼夜 地 去 追 赶 它 们 。 这 样 的 人 就 不 会 忘 记'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("supermy/c2m") model = AutoModelForSeq2SeqLM.from_pretrained("supermy/c2m") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Training data 非常全的文言文(古文)-现代文平行语料,基本涵盖了大部分经典古籍著作。 原始爬取的数据是篇章级对齐,经过脚本分句(按照句号分号感叹号问号划分)以及人工校对,形成共计约96万句对。目录bitext下是文言文-现代文对齐的平行数据。此外,目录source下是文言文单语数据,target下是现代文单语数据,这两个目录下的文件内容按行对齐。 以下为数据统计信息。其中,短篇章中包括了《论语》、《孟子》、《左传》等篇幅较短的古籍,已和《资治通鉴》合并。 |书名|句数 |:--|:--| 短篇章和资治通鉴|348727 元史|21182 北史|25823 北书|10947 南史|13838 南齐书|13137 史记|17701 后汉书|17753 周书|14930 太平广记|59358 宋书|23794 宋史|77853 徐霞客游记|22750 新五代史|10147 新唐书|12359 旧五代史|11377 旧唐书|29185 明史|85179 晋书|21133 梁书|14318 水经注全|11630 汉书|37622 辽史|9278 金史|13758 陈书|7096 隋书|8204 魏书|28178 **总计**|**967257** 《短篇章和资治通鉴》中各书籍统计如下(此部分数据量不完全准确): |书名|句数 |:--|:--| 资治通鉴|7.95w 左传|1.09w 大学章句集注| 86 反经| 4211 公孙龙子| 73 管子| 6266 鬼谷子| 385 韩非子| 4325 淮南子| 2669 黄帝内经| 6162 皇帝四经| 243 将苑| 100 金刚经| 193 孔子家语| 138 老子| 398 了凡四训| 31 礼记| 4917 列子| 1735 六韬| 693 六祖坛经| 949 论语| 988 吕氏春秋| 2473 孟子| 1654 梦溪笔谈| 1280 墨子| 2921 千字文| 82 清史稿| 1604 三字经| 234 山海经| 919 伤寒论| 712 商君书| 916 尚书| 1048 世说新语| 3044 司马法| 132 搜神记| 1963 搜神后记| 540 素书| 61 孙膑兵法| 230 孙子兵法| 338 天工开物| 807 尉缭子| 226 文昌孝经| 194 文心雕龙| 1388 吴子| 136 孝经| 102 笑林广记| 1496 荀子| 3131 颜氏家训| 510 仪礼| 2495 易传| 711 逸周书| 1505 战国策| 3318 贞观政要| 1291 中庸| 206 周礼| 2026 周易| 460 庄子| 1698 百战奇略| 800 论衡| 1.19w 智囊|2165 罗织经|188 朱子家训|31 抱朴子|217 地藏经|547 国语|3841 容斋随笔|2921 幼学琼林|1372 三略|268 围炉夜话|387 冰鉴|120 如果您使用该语料库,请注明出处:https://github.com/NiuTrans/Classical-Modern 感谢为该语料库做出贡献的成员:丁佳鹏、杨文权、刘晓晴、曹润柘、罗应峰。 ``` ``` ## Training procedure 在英伟达16G显卡训练了 4 天整,共计68 次。 [文言文数据集](https://huggingface.co/datasets/supermy/Classical-Modern) 训练数据. Helsinki-NLP [Helsinki-NLP](Helsinki-NLP/opus-mt-zh-en) 模型: ``` ### entry and citation info ``` ```
BigSalmon/T5Salmon
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
2022-11-24T22:47:32Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child-prototypical results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6546230158730159 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.31016042780748665 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32047477744807124 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4558087826570317 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.552 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33771929824561403 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.36342592592592593 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7158354678318517 - name: F1 (macro) type: f1_macro value: 0.6580573706656033 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7342723004694836 - name: F1 (macro) type: f1_macro value: 0.2592182558244037 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.4431202600216685 - name: F1 (macro) type: f1_macro value: 0.2667711261353617 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.865201363288586 - name: F1 (macro) type: f1_macro value: 0.6765044508427398 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.727984957693513 - name: F1 (macro) type: f1_macro value: 0.6461380162719604 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child-prototypical RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child-prototypical/raw/main/analogy.json)): - Accuracy on SAT (full): 0.31016042780748665 - Accuracy on SAT: 0.32047477744807124 - Accuracy on BATS: 0.4558087826570317 - Accuracy on U2: 0.33771929824561403 - Accuracy on U4: 0.36342592592592593 - Accuracy on Google: 0.552 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child-prototypical/raw/main/classification.json)): - Micro F1 score on BLESS: 0.7158354678318517 - Micro F1 score on CogALexV: 0.7342723004694836 - Micro F1 score on EVALution: 0.4431202600216685 - Micro F1 score on K&H+N: 0.865201363288586 - Micro F1 score on ROOT09: 0.727984957693513 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child-prototypical/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.6546230158730159 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child-prototypical") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 8 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None - data_level: child_prototypical The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2-child-prototypical/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
BigSalmon/T5Salmon2
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
13
2022-11-24T23:01:50Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: distilcamembert-cae-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilcamembert-cae-all This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6016 - Precision: 0.8510 - Recall: 0.8481 - F1: 0.8471 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 1.18 | 1.0 | 40 | 0.9901 | 0.6418 | 0.4557 | 0.2991 | | 0.8718 | 2.0 | 80 | 0.6938 | 0.7667 | 0.7468 | 0.7196 | | 0.4656 | 3.0 | 120 | 0.6928 | 0.8364 | 0.8354 | 0.8353 | | 0.2418 | 4.0 | 160 | 0.6008 | 0.8276 | 0.8228 | 0.8228 | | 0.1285 | 5.0 | 200 | 0.6016 | 0.8510 | 0.8481 | 0.8471 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
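Since the usage section above is left as "More information needed", here is a minimal inference sketch. The checkpoint path is a placeholder (the card does not state where the fine-tuned weights are published), and the returned labels depend on the unspecified training data; only the `transformers` text-classification pipeline itself is assumed.

```python
from transformers import pipeline

# Placeholder: point this at the actual hub id or local output directory
# of the fine-tuned "distilcamembert-cae-all" run (not given in the card).
checkpoint = "path/to/distilcamembert-cae-all"

classifier = pipeline("text-classification", model=checkpoint, tokenizer=checkpoint)

# The base model is a French DistilCamemBERT, so French input is expected;
# the label names come from whatever the fine-tuned head was configured with.
print(classifier("Exemple de phrase à classer."))
```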
BigSalmon/TS3
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible", "has_space" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-11-24T23:05:39Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Tokenizers 0.13.2
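Because the training script and data are not included in the card, the sketch below only shows how the hyperparameters listed above map onto `TrainingArguments`. The `imdb` dataset and the 3000-sample subset are stand-ins chosen for illustration — the card itself only says the model was trained on an unknown dataset.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Stand-in data: the card does not say which dataset was actually used.
raw = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

train_ds = raw["train"].shuffle(seed=42).select(range(3000)).map(tokenize, batched=True)
eval_ds = raw["test"].shuffle(seed=42).select(range(300)).map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

# Hyperparameters copied from the card; optimizer defaults already match
# Adam with betas=(0.9, 0.999) and epsilon=1e-08.
args = TrainingArguments(
    output_dir="finetuning-sentiment-model-3000-samples",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=eval_ds, tokenizer=tokenizer)
trainer.train()
```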
BigSalmon/prepositions
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: distilcamembert-cae-no-thinking results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilcamembert-cae-no-thinking This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5464 - Precision: 0.7959 - Recall: 0.7848 - F1: 0.7869 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 1.1607 | 1.0 | 40 | 0.9958 | 0.6444 | 0.4684 | 0.3248 | | 1.0099 | 2.0 | 80 | 0.9761 | 0.6090 | 0.5316 | 0.4480 | | 0.6294 | 3.0 | 120 | 0.6770 | 0.8067 | 0.7215 | 0.7542 | | 0.3294 | 4.0 | 160 | 0.5464 | 0.7959 | 0.7848 | 0.7869 | | 0.1986 | 5.0 | 200 | 0.5440 | 0.7882 | 0.7722 | 0.7785 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
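The card reports precision, recall and F1 but not how they were computed or averaged. One plausible way to wire such metrics into the `Trainer` is sketched below using scikit-learn; the choice of weighted averaging is an assumption, not something stated in the card.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "weighted" averaging is an assumption; the card does not state the scheme
    # behind the reported numbers (macro averaging would also be plausible).
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {"precision": precision, "recall": recall, "f1": f1}

# Passed to the Trainer as Trainer(..., compute_metrics=compute_metrics).
```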
BigTooth/DialoGPT-Megumin
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
2022-11-24T23:13:44Z
--- language: - en license: creativeml-openrail-m thumbnail: "https://huggingface.co/nitrosocke/Ghibli-Diffusion/resolve/main/images/ghibli-diffusion-thumbnail.jpg" tags: - stable-diffusion - text-to-image - image-to-image - diffusers --- ### Ghibli Diffusion This is the fine-tuned Stable Diffusion model trained on images from modern anime feature films from Studio Ghibli. Use the tokens **_ghibli style_** in your prompts for the effect. **If you enjoy my work and want to test new models before release, please consider supporting me** [![Become A Patreon](https://badgen.net/badge/become/a%20patron/F96854)](https://patreon.com/user?u=79196446) **Characters rendered with the model:** ![Characters Samples](https://huggingface.co/nitrosocke/Ghibli-Diffusion/resolve/main/images/ghibli-diffusion-samples-01s.jpg) **Cars and Animals rendered with the model:** ![Misc. Samples](https://huggingface.co/nitrosocke/Ghibli-Diffusion/resolve/main/images/ghibli-diffusion-samples-02s.jpg) **Landscapes rendered with the model:** ![Landscape 1](https://huggingface.co/nitrosocke/Ghibli-Diffusion/resolve/main/images/ghibli-diffusion-samples-03s.jpg) _ghibli style beautiful Caribbean beach tropical (sunset) - Negative prompt: soft blurry_ ![Landscape 2](https://huggingface.co/nitrosocke/Ghibli-Diffusion/resolve/main/images/ghibli-diffusion-samples-04s.jpg) _ghibli style ice field white mountains ((northern lights)) starry sky low horizon - Negative prompt: soft blurry_ #### Prompt and settings for the Strom Trooper: **ghibli style (storm trooper) Negative prompt: (bad anatomy)** _Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3450349066, Size: 512x704_ #### Prompt and settings for the VW Beetle: **ghibli style VW beetle Negative prompt: soft blurry** _Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 1529856912, Size: 704x512_ This model was trained using the diffusers based dreambooth training by ShivamShrirao using prior-preservation loss and the _train-text-encoder_ flag in 15.000 steps. <!-- ### Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI run redshift-diffusion: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/nitrosocke/Ghibli-Diffusion-Demo)--> ### 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). ```python from diffusers import StableDiffusionPipeline import torch model_id = "nitrosocke/Ghibli-Diffusion" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "ghibli style magical princess with golden hair" image = pipe(prompt).images[0] image.save("./magical_princess.png") ``` ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. 
The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)