| Column | Type | Range / values |
|:---|:---|:---|
| modelId | string | lengths 4–81 |
| tags | list | |
| pipeline_tag | string | 17 classes |
| config | dict | |
| downloads | int64 | 0–59.7M |
| first_commit | timestamp[ns, tz=UTC] | |
| card | string | lengths 51–438k |
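As a quick sketch of how rows with this schema could be read, assuming the dump is published as a Hub dataset (the repo id below is a placeholder, not the actual source):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the Hub dataset this dump actually comes from.
ds = load_dataset("your-username/model-metadata-dump", split="train")

row = ds[0]
print(row["modelId"], row["pipeline_tag"], row["downloads"])
print(row["card"][:200])  # first characters of the raw model card text
```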
Ayham/xlnet_bert_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9265 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2101 - Accuracy: 0.9265 - F1 Score: 0.9265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | No log | 1.0 | 250 | 0.3016 | 0.9095 | 0.9067 | | No log | 2.0 | 500 | 0.2101 | 0.9265 | 0.9265 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- language: ms --- # roberta-base-bahasa-cased Pretrained RoBERTa base language model for Malay. ## Pretraining Corpus The `roberta-base-bahasa-cased` model was pretrained on ~400 million words. Below is the list of data we trained on: 1. IIUM confession, https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean 2. local Instagram, https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean 3. local news, https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean 4. local parliament hansards, https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean 5. local research papers related to `kebudayaan`, `keagaaman` and `etnik`, https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean 6. local twitter, https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean 7. Malay Wattpad, https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean 8. Malay Wikipedia, https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean ## Pretraining details - All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/roberta. ## Example using AutoModelForMaskedLM ```python from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline model = AutoModelForMaskedLM.from_pretrained('mesolitica/roberta-base-bahasa-cased') tokenizer = AutoTokenizer.from_pretrained( 'mesolitica/roberta-base-bahasa-cased', do_lower_case = False, ) fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer) fill_mask('Permohonan Najib, anak untuk dengar isu perlembagaan <mask> .') ``` The output is: ```json [{'score': 0.3368818759918213, 'token': 746, 'token_str': ' negara', 'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan negara.'}, {'score': 0.09646568447351456, 'token': 598, 'token_str': ' Malaysia', 'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan Malaysia.'}, {'score': 0.029483484104275703, 'token': 3265, 'token_str': ' UMNO', 'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan UMNO.'}, {'score': 0.026470622047781944, 'token': 2562, 'token_str': ' parti', 'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan parti.'}, {'score': 0.023237623274326324, 'token': 391, 'token_str': ' ini', 'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan ini.'}] ```
Ayham/xlnetgpt2_xsum7
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1704.47 +/- 175.74 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
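A minimal loading sketch for the TODO above, assuming the checkpoint is stored as a Stable-Baselines3 `.zip` file on the Hub (the repo id and filename below are placeholders, not the actual values for this model):

```python
import gym
import pybullet_envs  # importing this registers AntBulletEnv-v0 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id and filename -- replace with the real ones for this checkpoint.
checkpoint = load_from_hub(repo_id="user/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```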
Ayoola/pytorch_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer model-index: - name: GPT-Neo_DnD_Control results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT-Neo_DnD_Control This model is a fine-tuned version of [PatrickTyBrown/GPT-Neo_DnD](https://huggingface.co/PatrickTyBrown/GPT-Neo_DnD) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 2.6518 - eval_runtime: 141.422 - eval_samples_per_second: 6.527 - eval_steps_per_second: 3.267 - epoch: 3.9 - step: 36000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Ayoola/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model clam004/emerg-intent-consistent-good-gpt2-xl-v2 is restricted and you are not in the authorized list. Visit https://huggingface.co/clam004/emerg-intent-consistent-good-gpt2-xl-v2 to ask for access.
Ayran/DialoGPT-medium-harry-1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.8.1+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
Ayran/DialoGPT-small-gandalf
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: mit --- ### madhubani art on Stable Diffusion This is the `<madhubani-art>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<madhubani-art> 0](https://huggingface.co/sd-concepts-library/madhubani-art/resolve/main/concept_images/0.jpeg) ![<madhubani-art> 1](https://huggingface.co/sd-concepts-library/madhubani-art/resolve/main/concept_images/1.jpeg) ![<madhubani-art> 2](https://huggingface.co/sd-concepts-library/madhubani-art/resolve/main/concept_images/2.jpeg) ![<madhubani-art> 3](https://huggingface.co/sd-concepts-library/madhubani-art/resolve/main/concept_images/3.jpeg)
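A minimal sketch of using the concept outside the notebooks above, assuming a recent `diffusers` release with `load_textual_inversion` and Stable Diffusion v1.5 as the base checkpoint (both assumptions, not stated in the original card):

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; the concept repo id matches the image URLs above.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/madhubani-art")

image = pipe("a village pond in the style of <madhubani-art>").images[0]
image.save("madhubani-art-sample.png")
```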
Ayran/DialoGPT-small-harry-potter-1-through-3
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PPO results: - metrics: - type: mean_reward value: 208.80 +/- 135.81 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. To learn to code your own PPO agent and train it, check out Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8 # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'CartPole-v1' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'virtual_display': True 'repo_id': 'NithirojTripatarasit/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
Azaghast/GPT2-SCP-Descriptions
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5451837431775948 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5532 - Matthews Correlation: 0.5452 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5248 | 1.0 | 535 | 0.5479 | 0.3922 | | 0.3503 | 2.0 | 1070 | 0.5148 | 0.4822 | | 0.2386 | 3.0 | 1605 | 0.5532 | 0.5452 | | 0.1773 | 4.0 | 2140 | 0.6818 | 0.5282 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Azuris/DialoGPT-medium-senorita
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.5911706349206349 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3235294117647059 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.314540059347181 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4118954974986103 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.43 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34649122807017546 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3125 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8142232936567726 - name: F1 (macro) type: f1_macro value: 0.7823150685401111 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7779342723004695 - name: F1 (macro) type: f1_macro value: 0.4495225434483775 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5357529794149513 - name: F1 (macro) type: f1_macro value: 0.45418166183928343 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8190164846630034 - name: F1 (macro) type: f1_macro value: 0.6465234410767566 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8834221247257913 - name: F1 (macro) type: f1_macro value: 0.8771202456083294 --- # relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/analogy.json)): - Accuracy on SAT (full): 0.3235294117647059 - Accuracy on SAT: 0.314540059347181 - Accuracy on BATS: 0.4118954974986103 - Accuracy on U2: 0.34649122807017546 - Accuracy on U4: 0.3125 - Accuracy on Google: 0.43 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8142232936567726 - Micro F1 score on CogALexV: 0.7779342723004695 - Micro F1 score on EVALution: 0.5357529794149513 - Micro F1 score on K&H+N: 0.8190164846630034 - Micro F1 score on ROOT09: 0.8834221247257913 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.5911706349206349 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity - split: train - data_eval: relbert/conceptnet_high_confidence - split_eval: full - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <mask> - loss_function: nce_logout - classification_loss: True - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 24 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - exclude_relation_eval: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
BSC-LT/roberta-large-bne-capitel-ner
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "ner", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - huggingface/autotrain-data-imdb-full co2_eq_emissions: emissions: 74.06131825522797 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1373552879 - CO2 Emissions (in grams): 74.0613 ## Validation Metrics - Loss: 0.193 - Accuracy: 0.933 - Precision: 0.941 - Recall: 0.923 - AUC: 0.980 - F1: 0.932 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huggingface/autotrain-imdb-full-1373552879 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huggingface/autotrain-imdb-full-1373552879", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huggingface/autotrain-imdb-full-1373552879", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Badr/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: bsd --- A lightweight solution for the Kaggle ELL competition using DistilBERT. Info about the Kaggle ELL competition: <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/code">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/code</a>
Bagus/ser-japanese
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
- ./gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz |Filename | URL for downloading| comment| |-------|-----| ---| |./gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz | <https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads/8-3-2019-03> | <https://developer.arm.com/downloads/-/gnu-a> |
BaptisteDoyen/camembert-base-xnli
[ "pytorch", "tf", "camembert", "text-classification", "fr", "dataset:xnli", "transformers", "zero-shot-classification", "xnli", "nli", "license:mit", "has_space" ]
zero-shot-classification
{ "architectures": [ "CamembertForSequenceClassification" ], "model_type": "camembert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
405,474
2022-09-07T13:15:31Z
--- license: cc-by-4.0 --- Install Instructions 1. Download Model into Google Drive > AI > DiscoDiffusion > Models 2. Add path '/content/drive/MyDrive/AI/DiscoDiffusion/Models/AIDM_130k_v01.pt' to Disco Diffusion Step 2 > Custom Model > Custom Path 3. In Custom Model Settings add the following code below 4. Run All Custom Model Settings --- #@markdown ####**Custom Model Settings:** if diffusion_model == 'custom': model_config.update({ 'attention_resolutions': '32, 16, 8', 'class_cond': False, 'diffusion_steps': 1000, 'rescale_timesteps': True, 'image_size': 512, 'learn_sigma': True, 'noise_schedule': 'linear', 'num_channels': 128, 'num_heads': 4, 'num_res_blocks': 2, 'resblock_updown': True, 'use_checkpoint': use_checkpoint, 'use_fp16': True, 'use_scale_shift_norm': True, })
BatuhanYilmaz/bert-finetuned-nerxD
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - huggingface/autotrain-data-emotions co2_eq_emissions: emissions: 0.05402221758817422 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1374752887 - CO2 Emissions (in grams): 0.0540 ## Validation Metrics - Loss: 0.122 - Accuracy: 0.940 - Macro F1: 0.912 - Micro F1: 0.940 - Weighted F1: 0.940 - Macro Precision: 0.902 - Micro Precision: 0.940 - Weighted Precision: 0.943 - Macro Recall: 0.928 - Micro Recall: 0.940 - Weighted Recall: 0.940 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huggingface/autotrain-emotions-1374752887 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huggingface/autotrain-emotions-1374752887", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huggingface/autotrain-emotions-1374752887", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
BatuhanYilmaz/code-search-net-tokenizer1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - huggingface/autotrain-data-emotions co2_eq_emissions: emissions: 0.030012388645982102 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1374752888 - CO2 Emissions (in grams): 0.0300 ## Validation Metrics - Loss: 0.136 - Accuracy: 0.938 - Macro F1: 0.901 - Micro F1: 0.938 - Weighted F1: 0.937 - Macro Precision: 0.923 - Micro Precision: 0.938 - Weighted Precision: 0.938 - Macro Recall: 0.888 - Micro Recall: 0.938 - Weighted Recall: 0.938 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huggingface/autotrain-emotions-1374752888 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huggingface/autotrain-emotions-1374752888", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huggingface/autotrain-emotions-1374752888", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:squad", "arxiv:1910.01108", "transformers", "question-answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
null
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - huggingface/autotrain-data-emotions co2_eq_emissions: emissions: 5.378843181503548 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1374752889 - CO2 Emissions (in grams): 5.3788 ## Validation Metrics - Loss: 0.136 - Accuracy: 0.938 - Macro F1: 0.903 - Micro F1: 0.938 - Weighted F1: 0.938 - Macro Precision: 0.925 - Micro Precision: 0.938 - Weighted Precision: 0.939 - Macro Recall: 0.890 - Micro Recall: 0.938 - Weighted Recall: 0.938 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huggingface/autotrain-emotions-1374752889 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huggingface/autotrain-emotions-1374752889", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huggingface/autotrain-emotions-1374752889", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
BatuhanYilmaz/dummy-model
[ "tf", "camembert", "fill-mask", "transformers", "generated_from_keras_callback", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "CamembertForMaskedLM" ], "model_type": "camembert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - huggingface/autotrain-data-emotions co2_eq_emissions: emissions: 18.738323825083565 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1374752890 - CO2 Emissions (in grams): 18.7383 ## Validation Metrics - Loss: 0.141 - Accuracy: 0.939 - Macro F1: 0.905 - Micro F1: 0.939 - Weighted F1: 0.939 - Macro Precision: 0.914 - Micro Precision: 0.939 - Weighted Precision: 0.942 - Macro Recall: 0.903 - Micro Recall: 0.939 - Weighted Recall: 0.939 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huggingface/autotrain-emotions-1374752890 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huggingface/autotrain-emotions-1374752890", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huggingface/autotrain-emotions-1374752890", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
BatuhanYilmaz/mlm-finetuned-imdb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - yelp_review_full metrics: - accuracy model-index: - name: Bert_Classifier results: - task: name: Text Classification type: text-classification dataset: name: yelp_review_full type: yelp_review_full config: yelp_review_full split: train args: yelp_review_full metrics: - name: Accuracy type: accuracy value: 0.634 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bert_Classifier This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset. It achieves the following results on the evaluation set: - Loss: 1.0546 - Accuracy: 0.634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9528 | 1.0 | 2500 | 0.9097 | 0.5985 | | 0.7607 | 2.0 | 5000 | 0.8969 | 0.627 | | 0.5039 | 3.0 | 7500 | 1.0546 | 0.634 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Baybars/debateGPT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - MRC - Natural Questions List - xlm-roberta-large language: - multilingual --- # Model description An XLM-RoBERTa reading comprehension model for List Question Answering using a fine-tuned [xlm-roberta-large](https://huggingface.co/xlm-roberta-large/) model that is further fine-tuned on the list questions in the [Natural Questions](https://huggingface.co/datasets/natural_questions) dataset. ## Intended uses & limitations You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, xlm-roberta-large, that we used may be present in our fine-tuned model, listqa_nq-task-xlm-roberta-large. ## Usage You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [listqa.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/listqa.ipynb). ### BibTeX entry and citation info ```bibtex @article{kwiatkowski-etal-2019-natural, title = "Natural Questions: A Benchmark for Question Answering Research", author = "Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and Toutanova, Kristina and Jones, Llion and Kelcey, Matthew and Chang, Ming-Wei and Dai, Andrew M. and Uszkoreit, Jakob and Le, Quoc and Petrov, Slav", journal = "Transactions of the Association for Computational Linguistics", volume = "7", year = "2019", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/Q19-1026", doi = "10.1162/tacl_a_00276", pages = "452--466", } ``` ```bibtex @article{DBLP:journals/corr/abs-1911-02116, author = {Alexis Conneau and Kartikay Khandelwal and Naman Goyal and Vishrav Chaudhary and Guillaume Wenzek and Francisco Guzm{\'{a}}n and Edouard Grave and Myle Ott and Luke Zettlemoyer and Veselin Stoyanov}, title = {Unsupervised Cross-lingual Representation Learning at Scale}, journal = {CoRR}, volume = {abs/1911.02116}, year = {2019}, url = {http://arxiv.org/abs/1911.02116}, eprinttype = {arXiv}, eprint = {1911.02116}, timestamp = {Mon, 11 Nov 2019 18:38:09 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
Bia18/Beatriz
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Imene/vit-base-patch16-224-in21k-Wr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Imene/vit-base-patch16-224-in21k-Wr This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3104 - Train Accuracy: 0.9956 - Train Top-3-accuracy: 0.9981 - Validation Loss: 1.6041 - Validation Accuracy: 0.5770 - Validation Top-3-accuracy: 0.8035 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 1500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 3.8300 | 0.0583 | 0.1381 | 3.6801 | 0.0951 | 0.2203 | 0 | | 3.2915 | 0.2418 | 0.4557 | 3.0277 | 0.3004 | 0.5507 | 1 | | 2.6535 | 0.4438 | 0.7106 | 2.5932 | 0.3780 | 0.6546 | 2 | | 2.0541 | 0.6308 | 0.8575 | 2.2998 | 0.4556 | 0.6871 | 3 | | 1.4622 | 0.7924 | 0.9496 | 2.0054 | 0.5056 | 0.7234 | 4 | | 0.9098 | 0.9201 | 0.9887 | 1.8079 | 0.5695 | 0.7785 | 5 | | 0.5220 | 0.9821 | 0.9969 | 1.6444 | 0.5845 | 0.7922 | 6 | | 0.3104 | 0.9956 | 0.9981 | 1.6041 | 0.5770 | 0.8035 | 7 | ### Framework versions - Transformers 4.21.3 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
Biasface/DDDC
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- language: en license: mit tags: - vision - video-classification model-index: - name: nielsr/xclip-base-patch16-hmdb-2-shot results: - task: type: video-classification dataset: name: HMDB-51 type: hmdb-51 metrics: - type: top-1 accuracy value: 53.0 --- # X-CLIP (base-sized model) X-CLIP model (base-sized, patch resolution of 16) trained in a few-shot fashion (K=2) on [HMDB-51](https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP). This model was trained using 32 frames per video, at a resolution of 224x224. Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs. ![X-CLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png) This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval. ## Intended uses & limitations You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#). ## Training data This model was trained on [HMDB-51](https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/). ### Preprocessing The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247). The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285). During validation, one resizes the shorter edge of each frame, after which center cropping is performed to a fixed-size resolution (like 224x224). Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results This model achieves a top-1 accuracy of 53.0%.
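A minimal zero-/few-shot classification sketch with the `transformers` X-CLIP classes, assuming 32 sampled frames per clip as described above (the candidate labels and dummy frames are placeholders):

```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

model_id = "nielsr/xclip-base-patch16-hmdb-2-shot"  # id taken from the model-index above
processor = XCLIPProcessor.from_pretrained(model_id)
model = XCLIPModel.from_pretrained(model_id)

# 32 RGB frames of 224x224; a real pipeline would sample these from a video clip.
video = list(np.random.randint(0, 255, (32, 224, 224, 3), dtype=np.uint8))
texts = ["a person climbing", "a person running"]  # placeholder HMDB-style labels

inputs = processor(text=texts, videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_video.softmax(dim=1)  # similarity of the clip to each candidate label
```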
BigDaddyNe1L/Hhaa
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en license: mit tags: - vision - video-classification model-index: - name: nielsr/xclip-base-patch16-hmdb-8-shot results: - task: type: video-classification dataset: name: HMDB-51 type: hmdb-51 metrics: - type: top-1 accuracy value: 62.8 --- # X-CLIP (base-sized model) X-CLIP model (base-sized, patch resolution of 16) trained in a few-shot fashion (K=8) on [HMDB-51](https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP). This model was trained using 32 frames per video, at a resolution of 224x224. Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs. ![X-CLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png) This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval. ## Intended uses & limitations You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#). ## Training data This model was trained on [HMDB-51](https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/). ### Preprocessing The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247). The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285). During validation, one resizes the shorter edge of each frame, after which center cropping is performed to a fixed-size resolution (like 224x224). Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results This model achieves a top-1 accuracy of 62.8%.
BigSalmon/FormalBerta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: en license: mit tags: - vision - video-classification model-index: - name: nielsr/xclip-base-patch16-ucf-8-shot results: - task: type: video-classification dataset: name: UCF101 type: ucf101 metrics: - type: top-1 accuracy value: 88.3 --- # X-CLIP (base-sized model) X-CLIP model (base-sized, patch resolution of 16) trained in a few-shot fashion (K=8) on [UCF101](https://www.crcv.ucf.edu/data/UCF101.php). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP). This model was trained using 32 frames per video, at a resolution of 224x224. Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs. ![X-CLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png) This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval. ## Intended uses & limitations You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#). ## Training data This model was trained on [UCF101](https://www.crcv.ucf.edu/data/UCF101.php). ### Preprocessing The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247). The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285). During validation, one resizes the shorter edge of each frame, after which center cropping is performed to a fixed-size resolution (like 224x224). Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results This model achieves a top-1 accuracy of 88.3%.
BigSalmon/FormalBerta2
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- language: en # <-- my language widget: - text: "Moody’s decision to upgrade the credit rating of Air Liquide is all the more remarkable as it is taking place in a more difficult macroeconomic and geopolitical environment. It underlines the Group’s capacity to maintain a high level of cash flow despite the fluctuations of the economy. Following Standard & Poor’s decision to upgrade Air Liquide’s credit rating, this decision recognizes the Group’s level of debt, which has been brought back to its pre-Airgas 2016 acquisition level in five years. It also reflects the largely demonstrated resilience of the Group’s business model." tags: - generated_from_trainer metrics: - f1 model-index: - name: roberta-sent-generali results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-sent-generali This model was fine-tuned on Roberta Large using a private dataset. It achieves the following results on the evaluation set: - Loss: 0.4885 - F1: 0.9104 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.355 | 1.0 | 262 | 0.3005 | 0.8829 | | 0.2201 | 2.0 | 524 | 0.3566 | 0.8930 | | 0.1293 | 3.0 | 786 | 0.3644 | 0.9193 | | 0.0662 | 4.0 | 1048 | 0.4202 | 0.9145 | | 0.026 | 5.0 | 1310 | 0.4885 | 0.9104 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
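Since the card above stops at training metrics, here is a minimal inference sketch, assuming the checkpoint is published under the name given in the model index and loads as an ordinary sequence-classification model; the example sentence is taken from the card's widget:

```python
from transformers import pipeline

# "roberta-sent-generali" is only the short name from the card's model index;
# substitute the full hub id of wherever the fine-tuned checkpoint was pushed.
classifier = pipeline("text-classification", model="roberta-sent-generali")

text = (
    "Moody's decision to upgrade the credit rating of Air Liquide is all the more remarkable "
    "as it is taking place in a more difficult macroeconomic and geopolitical environment."
)
print(classifier(text))  # e.g. [{'label': ..., 'score': ...}]
```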
BigSalmon/GPTHeHe
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit --- ### Cheburashka on Stable Diffusion This is the `<cheburashka>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cheburashka> 0](https://huggingface.co/sd-concepts-library/cheburashka/resolve/main/concept_images/0.jpeg) ![<cheburashka> 1](https://huggingface.co/sd-concepts-library/cheburashka/resolve/main/concept_images/3.jpeg) ![<cheburashka> 2](https://huggingface.co/sd-concepts-library/cheburashka/resolve/main/concept_images/1.jpeg) ![<cheburashka> 3](https://huggingface.co/sd-concepts-library/cheburashka/resolve/main/concept_images/2.jpeg)
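The concept card above points to the Stable Conceptualizer notebook for loading instructions. A condensed sketch of the same idea with diffusers follows; the `learned_embeds.bin` filename and the base Stable Diffusion checkpoint are assumptions that mirror the textual-inversion notebooks, not facts stated in the card:

```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

# Assumption: the concept repo ships its embedding as learned_embeds.bin, as the training notebook does.
embed_path = hf_hub_download("sd-concepts-library/cheburashka", "learned_embeds.bin")
learned = torch.load(embed_path, map_location="cpu")
token, embedding = next(iter(learned.items()))  # token should be "<cheburashka>"

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Register the new token and write its learned embedding into the text encoder.
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe("a plush <cheburashka> sitting on a bookshelf").images[0]
image.save("cheburashka.png")
```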
BigSalmon/GPTIntro
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en license: mit tags: - vision - video-classification model-index: - name: nielsr/xclip-base-patch16-zero-shot results: - task: type: video-classification dataset: name: HMDB-51 type: hmdb-51 metrics: - type: top-1 accuracy value: 44.6 - task: type: video-classification dataset: name: UCF101 type: ucf101 metrics: - type: top-1 accuracy value: 72.0 - task: type: video-classification dataset: name: Kinetics-600 type: kinetics600 metrics: - type: top-1 accuracy value: 65.2 --- # X-CLIP (base-sized model) X-CLIP model (base-sized, patch resolution of 16) trained on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP). This model was trained using 32 frames per video, at a resolution of 224x224. Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs. ![X-CLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png) This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval. ## Intended uses & limitations You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#). ## Training data This model was trained on [Kinetics 400](https://www.deepmind.com/open-source/kinetics). ### Preprocessing The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247). The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285). During validation, one resizes the shorter edge of each frame, after which center cropping is performed to a fixed-size resolution (like 224x224). Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results This model achieves a zero-shot top-1 accuracy of 44.6% on HMDB-51, 72.0% on UCF-101 and 65.2% on Kinetics-600.
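Because this checkpoint is advertised for zero-shot use and video-text retrieval, a slightly different sketch than the few-shot one earlier may help: scoring a clip against free-form descriptions via pooled video and text embeddings. That `get_video_features`/`get_text_features` mirror CLIP's API is an assumption, as is the repo id taken from the card's model index:

```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

ckpt = "nielsr/xclip-base-patch16-zero-shot"  # name from the card's model index
processor = XCLIPProcessor.from_pretrained(ckpt)
model = XCLIPModel.from_pretrained(ckpt)

# Dummy clip: 32 RGB frames at 224x224, as described in the card.
video = list(np.random.randint(0, 256, (32, 224, 224, 3), dtype=np.uint8))
texts = ["someone swimming laps", "a dog catching a frisbee", "people dancing at a wedding"]

inputs = processor(text=texts, videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    video_emb = model.get_video_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )

# Cosine similarity between the clip and each candidate description (retrieval-style scoring).
video_emb = video_emb / video_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
print(text_emb @ video_emb.T)
```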
BigSalmon/GoodMaskResults
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: m-ctc-t-german results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # m-ctc-t-german This model is a fine-tuned version of [speechbrain/m-ctc-t-large](https://huggingface.co/speechbrain/m-ctc-t-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 14.0387 - Wer: 1.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 448 - eval_batch_size: 448 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 896 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1883 | 1.0 | 511 | 3.5192 | 1.0 | | 3.1097 | 2.0 | 1022 | 11.0713 | 1.0000 | | 3.0541 | 3.0 | 1533 | 14.0387 | 1.0000 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
BigSalmon/InformalToFormalLincoln14
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit --- ### karl's lzx 1 on Stable Diffusion This is the `<lzx>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<lzx> 0](https://huggingface.co/sd-concepts-library/karl-s-lzx-1/resolve/main/concept_images/0.jpeg) ![<lzx> 1](https://huggingface.co/sd-concepts-library/karl-s-lzx-1/resolve/main/concept_images/3.jpeg) ![<lzx> 2](https://huggingface.co/sd-concepts-library/karl-s-lzx-1/resolve/main/concept_images/1.jpeg) ![<lzx> 3](https://huggingface.co/sd-concepts-library/karl-s-lzx-1/resolve/main/concept_images/2.jpeg)
BigSalmon/InformalToFormalLincoln23
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - generated_from_keras_callback model-index: - name: VanessaSchenkel/padrao-unicamp-vanessa-finetuned-handscrafted results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # VanessaSchenkel/padrao-unicamp-vanessa-finetuned-handscrafted This model is a fine-tuned version of [VanessaSchenkel/padrao-unicamp-finetuned-news_commentary](https://huggingface.co/VanessaSchenkel/padrao-unicamp-finetuned-news_commentary) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8875 - Validation Loss: 0.5701 - Train Bleu: 70.9943 - Train Gen Len: 8.8125 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch | |:----------:|:---------------:|:----------:|:-------------:|:-----:| | 0.8875 | 0.5701 | 70.9943 | 8.8125 | 0 | ### Framework versions - Transformers 4.21.3 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
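The Keras card above reports BLEU but gives no inference example. A rough sketch, assuming the checkpoint loads as a TensorFlow seq2seq model; the card does not state the translation direction, so the input sentence is purely illustrative:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

ckpt = "VanessaSchenkel/padrao-unicamp-vanessa-finetuned-handscrafted"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = TFAutoModelForSeq2SeqLM.from_pretrained(ckpt)

inputs = tokenizer("How are you today?", return_tensors="tf")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```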
BigSalmon/InformalToFormalLincoln24
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-09-07T19:20:53Z
--- license: mit --- ### canary cap on Stable Diffusion This is the `<canary-cap>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<canary-cap> 0](https://huggingface.co/sd-concepts-library/canary-cap/resolve/main/concept_images/0.jpeg) ![<canary-cap> 1](https://huggingface.co/sd-concepts-library/canary-cap/resolve/main/concept_images/3.jpeg) ![<canary-cap> 2](https://huggingface.co/sd-concepts-library/canary-cap/resolve/main/concept_images/4.jpeg) ![<canary-cap> 3](https://huggingface.co/sd-concepts-library/canary-cap/resolve/main/concept_images/1.jpeg) ![<canary-cap> 4](https://huggingface.co/sd-concepts-library/canary-cap/resolve/main/concept_images/2.jpeg)
BigSalmon/InformalToFormalLincolnDistilledGPT2
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.917741935483871 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7710 - Accuracy: 0.9177 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2892 | 1.0 | 318 | 3.2830 | 0.7432 | | 2.627 | 2.0 | 636 | 1.8728 | 0.8403 | | 1.5429 | 3.0 | 954 | 1.1554 | 0.8910 | | 1.0089 | 4.0 | 1272 | 0.8530 | 0.9129 | | 0.7938 | 5.0 | 1590 | 0.7710 | 0.9177 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
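For readers who want to query the intent classifier described above, a minimal sketch follows. The card gives only the short name "distilbert-base-uncased-finetuned-clinc", so the repo id below is a placeholder for wherever the fine-tune was actually pushed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "distilbert-base-uncased-finetuned-clinc"  # placeholder; use the full hub id of the pushed checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

inputs = tokenizer("How do I reset my bank password?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # one of the clinc_oos "plus" intent labels
```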
BigSalmon/MrLincoln3
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: albert-base-v2-finetuned-ours-DS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-ours-DS This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8481 - Accuracy: 0.665 - Precision: 0.6202 - Recall: 0.6137 - F1: 0.5973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.0031150633640573e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.053 | 0.49 | 99 | 1.0208 | 0.56 | 0.5650 | 0.4382 | 0.3908 | | 0.8912 | 0.99 | 198 | 0.8648 | 0.59 | 0.3832 | 0.5279 | 0.4374 | | 0.777 | 1.48 | 297 | 0.7986 | 0.665 | 0.6579 | 0.5985 | 0.5704 | | 0.7391 | 1.98 | 396 | 0.8363 | 0.58 | 0.6389 | 0.5669 | 0.4966 | | 0.5861 | 2.48 | 495 | 0.8394 | 0.685 | 0.6380 | 0.6313 | 0.6312 | | 0.5711 | 2.97 | 594 | 0.9090 | 0.65 | 0.6102 | 0.5833 | 0.5900 | | 0.4296 | 3.46 | 693 | 0.8481 | 0.665 | 0.6202 | 0.6137 | 0.5973 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
BigSalmon/Points2
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-e-nce-classification-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.812936507936508 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5320855614973262 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5370919881305638 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.67982212340189 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.808 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5526315789473685 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5393518518518519 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9199939731806539 - name: F1 (macro) type: f1_macro value: 0.9166018343423508 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8643192488262911 - name: F1 (macro) type: f1_macro value: 0.705268091615862 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6944745395449621 - name: F1 (macro) type: f1_macro value: 0.6876822233529178 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9670306739931835 - name: F1 (macro) type: f1_macro value: 0.9005229676637396 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8909432779692886 - name: F1 (macro) type: f1_macro value: 0.8931476213358844 --- # relbert/roberta-large-semeval2012-mask-prompt-e-nce-classification-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-nce-classification-conceptnet-validated/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5320855614973262 - Accuracy on SAT: 0.5370919881305638 - Accuracy on BATS: 0.67982212340189 - Accuracy on U2: 0.5526315789473685 - Accuracy on U4: 0.5393518518518519 - Accuracy on Google: 0.808 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-nce-classification-conceptnet-validated/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9199939731806539 - Micro F1 score on CogALexV: 0.8643192488262911 - Micro F1 score on EVALution: 0.6944745395449621 - Micro F1 score on K&H+N: 0.9670306739931835 - Micro F1 score on ROOT09: 0.8909432779692886 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.812936507936508 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-e-nce-classification-conceptnet-validated") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity - split: train - data_eval: relbert/conceptnet_high_confidence - split_eval: full - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask> - loss_function: nce_logout - classification_loss: True - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - exclude_relation_eval: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-nce-classification-conceptnet-validated/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Blabla/Pipipopo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Walter Wick photography on Stable Diffusion This is the `<walter-wick>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<walter-wick> 0](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/21.jpeg) ![<walter-wick> 1](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/19.jpeg) ![<walter-wick> 2](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/11.jpeg) ![<walter-wick> 3](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/18.jpeg) ![<walter-wick> 4](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/0.jpeg) ![<walter-wick> 5](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/9.jpeg) ![<walter-wick> 6](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/3.jpeg) ![<walter-wick> 7](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/10.jpeg) ![<walter-wick> 8](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/6.jpeg) ![<walter-wick> 9](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/14.jpeg) ![<walter-wick> 10](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/4.jpeg) ![<walter-wick> 11](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/1.jpeg) ![<walter-wick> 12](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/5.jpeg) ![<walter-wick> 13](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/2.jpeg) ![<walter-wick> 14](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/12.jpeg) ![<walter-wick> 15](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/13.jpeg) ![<walter-wick> 16](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/20.jpeg) ![<walter-wick> 17](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/7.jpeg) ![<walter-wick> 18](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/15.jpeg) ![<walter-wick> 19](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/16.jpeg) ![<walter-wick> 20](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/17.jpeg) ![<walter-wick> 21](https://huggingface.co/sd-concepts-library/walter-wick-photography/resolve/main/concept_images/8.jpeg)
Blerrrry/Kkk
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - wildreceipt model-index: - name: layoutlmv3-finetuned-wildreceipt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-wildreceipt This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the wildreceipt dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2996 - eval_precision: 0.8566 - eval_recall: 0.8614 - eval_f1: 0.8590 - eval_accuracy: 0.9178 - eval_runtime: 51.7898 - eval_samples_per_second: 9.114 - eval_steps_per_second: 2.278 - epoch: 4.97 - step: 1577 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 4000 ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
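The LayoutLMv3 card above stops at evaluation metrics; below is a hedged inference sketch for key-information extraction on a receipt image. The processor's built-in OCR step (which requires pytesseract) and the checkpoint id are assumptions, not details from the card:

```python
import torch
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# Placeholder id from the card's model name; point this at the pushed checkpoint.
ckpt = "layoutlmv3-finetuned-wildreceipt"
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained(ckpt)

image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**encoding)

# One predicted wildreceipt label per token produced by the OCR step.
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```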
BobBraico/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-08T02:00:45Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/tasotaku/ddpm-butterflies-128/tensorboard?#scalars)
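The diffusion card above leaves its usage snippet as a TODO. A minimal unconditional-sampling sketch with diffusers is below; the repo id is only inferred from the TensorBoard link in the card and may not be where the pipeline ultimately lives:

```python
from diffusers import DDPMPipeline

# Repo id inferred from the card's TensorBoard link; adjust if the pipeline was pushed elsewhere.
pipeline = DDPMPipeline.from_pretrained("tasotaku/ddpm-butterflies-128")

image = pipeline().images[0]  # one 128x128 unconditional butterfly sample
image.save("butterfly.png")
```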
BobBraico/distilbert-base-uncased-finetuned-imdb-accelerate
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-08T02:05:04Z
--- license: mit --- ### Smiling Friend style on Stable Diffusion This is the `<smilingfriends-cartoon>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<smilingfriends-cartoon> 0](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/21.jpeg) ![<smilingfriends-cartoon> 1](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/14.jpeg) ![<smilingfriends-cartoon> 2](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/54.jpeg) ![<smilingfriends-cartoon> 3](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/70.jpeg) ![<smilingfriends-cartoon> 4](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/22.jpeg) ![<smilingfriends-cartoon> 5](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/15.jpeg) ![<smilingfriends-cartoon> 6](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/50.jpeg) ![<smilingfriends-cartoon> 7](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/36.jpeg) ![<smilingfriends-cartoon> 8](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/5.jpeg) ![<smilingfriends-cartoon> 9](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/4.jpeg) ![<smilingfriends-cartoon> 10](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/26.jpeg) ![<smilingfriends-cartoon> 11](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/40.jpeg) ![<smilingfriends-cartoon> 12](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/49.jpeg) ![<smilingfriends-cartoon> 13](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/44.jpeg) ![<smilingfriends-cartoon> 14](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/8.jpeg) ![<smilingfriends-cartoon> 15](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/62.jpeg) ![<smilingfriends-cartoon> 16](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/46.jpeg) ![<smilingfriends-cartoon> 17](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/12.jpeg) ![<smilingfriends-cartoon> 18](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/42.jpeg) ![<smilingfriends-cartoon> 19](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/37.jpeg) ![<smilingfriends-cartoon> 20](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/47.jpeg) ![<smilingfriends-cartoon> 21](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/29.jpeg) ![<smilingfriends-cartoon> 
22](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/71.jpeg) ![<smilingfriends-cartoon> 23](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/32.jpeg) ![<smilingfriends-cartoon> 24](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/66.jpeg) ![<smilingfriends-cartoon> 25](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/67.jpeg) ![<smilingfriends-cartoon> 26](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/20.jpeg) ![<smilingfriends-cartoon> 27](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/75.jpeg) ![<smilingfriends-cartoon> 28](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/45.jpeg) ![<smilingfriends-cartoon> 29](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/13.jpeg) ![<smilingfriends-cartoon> 30](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/72.jpeg) ![<smilingfriends-cartoon> 31](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/61.jpeg) ![<smilingfriends-cartoon> 32](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/43.jpeg) ![<smilingfriends-cartoon> 33](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/76.jpeg) ![<smilingfriends-cartoon> 34](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/38.jpeg) ![<smilingfriends-cartoon> 35](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/53.jpeg) ![<smilingfriends-cartoon> 36](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/7.jpeg) ![<smilingfriends-cartoon> 37](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/6.jpeg) ![<smilingfriends-cartoon> 38](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/51.jpeg) ![<smilingfriends-cartoon> 39](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/56.jpeg) ![<smilingfriends-cartoon> 40](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/55.jpeg) ![<smilingfriends-cartoon> 41](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/52.jpeg) ![<smilingfriends-cartoon> 42](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/1.jpeg) ![<smilingfriends-cartoon> 43](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/10.jpeg) ![<smilingfriends-cartoon> 44](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/28.jpeg) ![<smilingfriends-cartoon> 45](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/34.jpeg) ![<smilingfriends-cartoon> 46](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/68.jpeg) ![<smilingfriends-cartoon> 47](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/39.jpeg) ![<smilingfriends-cartoon> 48](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/31.jpeg) 
![<smilingfriends-cartoon> 49](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/69.jpeg) ![<smilingfriends-cartoon> 50](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/30.jpeg) ![<smilingfriends-cartoon> 51](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/35.jpeg) ![<smilingfriends-cartoon> 52](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/77.jpeg) ![<smilingfriends-cartoon> 53](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/60.jpeg) ![<smilingfriends-cartoon> 54](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/24.jpeg) ![<smilingfriends-cartoon> 55](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/59.jpeg) ![<smilingfriends-cartoon> 56](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/64.jpeg) ![<smilingfriends-cartoon> 57](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/74.jpeg) ![<smilingfriends-cartoon> 58](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/41.jpeg) ![<smilingfriends-cartoon> 59](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/2.jpeg) ![<smilingfriends-cartoon> 60](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/65.jpeg) ![<smilingfriends-cartoon> 61](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/17.jpeg) ![<smilingfriends-cartoon> 62](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/63.jpeg) ![<smilingfriends-cartoon> 63](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/18.jpeg) ![<smilingfriends-cartoon> 64](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/48.jpeg) ![<smilingfriends-cartoon> 65](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/19.jpeg) ![<smilingfriends-cartoon> 66](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/23.jpeg) ![<smilingfriends-cartoon> 67](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/58.jpeg) ![<smilingfriends-cartoon> 68](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/3.jpeg) ![<smilingfriends-cartoon> 69](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/73.jpeg) ![<smilingfriends-cartoon> 70](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/27.jpeg) ![<smilingfriends-cartoon> 71](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/57.jpeg) ![<smilingfriends-cartoon> 72](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/16.jpeg) ![<smilingfriends-cartoon> 73](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/33.jpeg) ![<smilingfriends-cartoon> 74](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/9.jpeg) ![<smilingfriends-cartoon> 
75](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/25.jpeg) ![<smilingfriends-cartoon> 76](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/0.jpeg) ![<smilingfriends-cartoon> 77](https://huggingface.co/sd-concepts-library/smiling-friend-style/resolve/main/concept_images/11.jpeg)
Boondong/Wandee
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-08T03:05:17Z
--- license: apache-2.0 --- ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Evaluation](#evaluation) 6. [Recommendations](#recommendations) 7. [Glossary and Calculations](#glossary-and-calculations) 8. [More Information](#more-information) 9. [Model Card Authors](#model-card-authors)
BrunoNogueira/DialoGPT-kungfupanda
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: mit --- ### Monster Girl on Stable Diffusion This is the `<monster-girl>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<monster-girl> 0](https://huggingface.co/sd-concepts-library/monster-girl/resolve/main/concept_images/1.jpeg) ![<monster-girl> 1](https://huggingface.co/sd-concepts-library/monster-girl/resolve/main/concept_images/2.jpeg) ![<monster-girl> 2](https://huggingface.co/sd-concepts-library/monster-girl/resolve/main/concept_images/3.jpeg) ![<monster-girl> 3](https://huggingface.co/sd-concepts-library/monster-girl/resolve/main/concept_images/0.jpeg)
Brunomezenga/NN
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/sanmemero/1662612412375/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1547249514485927937/xVT7Zk4l_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">San Memero 🇨🇺</div> <div style="text-align: center; font-size: 14px;">@sanmemero</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from San Memero 🇨🇺. | Data | San Memero 🇨🇺 | | --- | --- | | Tweets downloaded | 3211 | | Retweets | 251 | | Short tweets | 822 | | Tweets kept | 2138 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3llp69ch/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sanmemero's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kf2jjg02) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kf2jjg02/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sanmemero') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Bryanwong/wangchanberta-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/mariojpenton-mjorgec1994/1662625679744/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1539758332877197313/NRB0lc5a_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1526213406918905856/28mTAbCu_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Mario J. Pentón & Mag Jorge Castro🇨🇺</div> <div style="text-align: center; font-size: 14px;">@mariojpenton-mjorgec1994</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Mario J. Pentón & Mag Jorge Castro🇨🇺. | Data | Mario J. Pentón | Mag Jorge Castro🇨🇺 | | --- | --- | --- | | Tweets downloaded | 3244 | 3249 | | Retweets | 673 | 0 | | Short tweets | 120 | 236 | | Tweets kept | 2451 | 3013 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kbivb0e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mariojpenton-mjorgec1994's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3m6kiha6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3m6kiha6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mariojpenton-mjorgec1994') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
855
2022-09-08T09:21:14Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: greek_legal_bert_v2-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # greek_legal_bert_v2-finetuned-ner This model is a fine-tuned version of [alexaapo/greek_legal_bert_v2](https://huggingface.co/alexaapo/greek_legal_bert_v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0900 - Precision: 0.8424 - Recall: 0.8638 - F1: 0.8530 - Accuracy: 0.9775 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 0.64 | 250 | 0.0839 | 0.7859 | 0.8539 | 0.8185 | 0.9737 | | 0.1127 | 1.29 | 500 | 0.0783 | 0.8092 | 0.8569 | 0.8324 | 0.9759 | | 0.1127 | 1.93 | 750 | 0.0743 | 0.8284 | 0.8446 | 0.8364 | 0.9766 | | 0.0538 | 2.58 | 1000 | 0.0816 | 0.8243 | 0.8597 | 0.8416 | 0.9774 | | 0.0538 | 3.22 | 1250 | 0.0900 | 0.8424 | 0.8638 | 0.8530 | 0.9776 | | 0.0346 | 3.87 | 1500 | 0.0890 | 0.8401 | 0.8597 | 0.8498 | 0.9770 | | 0.0346 | 4.51 | 1750 | 0.0964 | 0.8342 | 0.8576 | 0.8457 | 0.9768 | | 0.0233 | 5.15 | 2000 | 0.1094 | 0.8336 | 0.8645 | 0.8488 | 0.9768 | | 0.0233 | 5.8 | 2250 | 0.1110 | 0.8456 | 0.8549 | 0.8502 | 0.9777 | | 0.0161 | 6.44 | 2500 | 0.1224 | 0.8408 | 0.8535 | 0.8471 | 0.9769 | | 0.0161 | 7.09 | 2750 | 0.1281 | 0.8347 | 0.8624 | 0.8483 | 0.9770 | | 0.0114 | 7.73 | 3000 | 0.1268 | 0.8397 | 0.8573 | 0.8484 | 0.9773 | | 0.0114 | 8.38 | 3250 | 0.1308 | 0.8388 | 0.8549 | 0.8468 | 0.9771 | | 0.0088 | 9.02 | 3500 | 0.1301 | 0.8412 | 0.8559 | 0.8485 | 0.9772 | | 0.0088 | 9.66 | 3750 | 0.1368 | 0.8396 | 0.8604 | 0.8499 | 0.9772 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
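The card above stops at training metrics and gives no inference snippet; a minimal sketch with the `transformers` token-classification pipeline follows. The hub id below is a placeholder (the card does not state where the fine-tuned checkpoint is published), and the printed label set is whatever that checkpoint's config defines.

```python
from transformers import pipeline

# Placeholder hub id: point this at wherever the fine-tuned checkpoint was actually pushed.
ner = pipeline(
    "token-classification",
    model="your-namespace/greek_legal_bert_v2-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "Ο Υπουργός Οικονομικών εξέδωσε απόφαση για τον Δήμο Αθηναίων."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```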
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
71
null
--- license: mit --- ### Party girl on Stable Diffusion This is the `<party-girl>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<party-girl> 0](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/5.jpeg) ![<party-girl> 1](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/4.jpeg) ![<party-girl> 2](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/1.jpeg) ![<party-girl> 3](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/2.jpeg) ![<party-girl> 4](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/3.jpeg) ![<party-girl> 5](https://huggingface.co/sd-concepts-library/party-girl/resolve/main/concept_images/0.jpeg)
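For readers who prefer a script over the linked notebooks, here is a minimal sketch that loads the learned `<party-girl>` embedding with `diffusers`. It assumes a diffusers version recent enough to expose `load_textual_inversion`, and the base checkpoint (`runwayml/stable-diffusion-v1-5`) is an assumption; any compatible Stable Diffusion v1 pipeline should work.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base checkpoint; the concept targets Stable Diffusion v1-style pipelines.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned token embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/party-girl")

# The placeholder token <party-girl> can now be used in prompts.
image = pipe("a portrait of <party-girl> at a rooftop party, golden hour").images[0]
image.save("party_girl.png")
```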
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: mit --- ### Dicoo on Stable Diffusion This is the `<Dicoo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<Dicoo> 0](https://huggingface.co/sd-concepts-library/dicoo/resolve/main/concept_images/4.jpeg) ![<Dicoo> 1](https://huggingface.co/sd-concepts-library/dicoo/resolve/main/concept_images/1.jpeg) ![<Dicoo> 2](https://huggingface.co/sd-concepts-library/dicoo/resolve/main/concept_images/2.jpeg) ![<Dicoo> 3](https://huggingface.co/sd-concepts-library/dicoo/resolve/main/concept_images/3.jpeg) ![<Dicoo> 4](https://huggingface.co/sd-concepts-library/dicoo/resolve/main/concept_images/0.jpeg)
CLEE/CLEE
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en license: mit tags: - vision - video-classification model-index: - name: nielsr/xclip-large-patch14-kinetics-600 results: - task: type: video-classification dataset: name: Kinetics 400 type: kinetics-400 metrics: - type: top-1 accuracy value: 88.3 - type: top-5 accuracy value: 97.7 --- # X-CLIP (large-sized model) X-CLIP model (large-sized, patch resolution of 14) trained fully-supervised on [Kinetics-600](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP). This model was trained using 8 frames per video, at a resolution of 224x224. Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs. ![X-CLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png) This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval. ## Intended uses & limitations You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#). ## Training data This model was trained on [Kinetics-600](https://www.deepmind.com/open-source/kinetics). ### Preprocessing The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247). The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285). During validation, one resizes the shorter edge of each frame, after which center cropping is performed to a fixed-size resolution (like 224x224). Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results This model achieves a top-1 accuracy of 88.3% and a top-5 accuracy of 97.7%.
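Since the card defers code examples to the documentation, the sketch below shows the generic X-CLIP scoring flow from `transformers` applied to this checkpoint. The clip of eight random frames is only a stand-in to illustrate the expected input shape (8 frames at 224x224, as described above), and the candidate labels are arbitrary.

```python
import numpy as np
import torch
from transformers import AutoProcessor, AutoModel

ckpt = "nielsr/xclip-large-patch14-kinetics-600"
processor = AutoProcessor.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt)

# Dummy video: 8 RGB frames of 224x224, matching the training setup; replace with real frames.
video = list(np.random.randint(0, 256, (8, 224, 224, 3), dtype=np.uint8))

inputs = processor(
    text=["playing drums", "riding a bike", "cooking"],
    videos=video,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    outputs = model(**inputs)

# Higher probability means the text matches the video better.
probs = outputs.logits_per_video.softmax(dim=1)
print(probs)
```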
CM-CA/Cartman
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: roberta_large-filtered_simple-chunk-conll2003_0907_v1 results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.9047619047619048 - name: Recall type: recall value: 0.8908827785817656 - name: F1 type: f1 value: 0.8977687035146565 - name: Accuracy type: accuracy value: 0.96574184566428 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_large-filtered_simple-chunk-conll2003_0907_v1 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.1299 - Precision: 0.9048 - Recall: 0.8909 - F1: 0.8978 - Accuracy: 0.9657 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2061 | 1.0 | 878 | 0.1237 | 0.9077 | 0.9008 | 0.9042 | 0.9661 | | 0.0999 | 2.0 | 1756 | 0.1131 | 0.9148 | 0.9079 | 0.9113 | 0.9687 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Capreolus/birch-bert-large-msmarco_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2022-09-08T16:05:57Z
Attempt at making a NWScript model trained on scripts from Neverwinter Nights. ## Intended Use and Limitations The intention is for the model to generate a script based on user-written comment strings.
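Since the card describes comment-driven generation but gives no snippet, here is a minimal text-generation sketch. Both the hub id and the prompt format (an NWScript-style `//` comment) are assumptions; adjust them to match how the checkpoint was actually published and trained.

```python
from transformers import pipeline

# Placeholder hub id: replace with the actual NWScript checkpoint repository.
generator = pipeline("text-generation", model="your-namespace/nwscript-gpt")

# Assumed prompt style: a comment describing the script to generate.
prompt = "// Make the nearest NPC walk to the door and open it\n"
result = generator(prompt, max_new_tokens=128, do_sample=True)
print(result[0]["generated_text"])
```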
Captain-1337/CrudeBERT
[ "pytorch", "bert", "text-classification", "arxiv:1908.10063", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-basil results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-basil This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9097 | 1.0 | 780 | 1.4978 | | 1.5358 | 2.0 | 1560 | 1.3439 | | 1.4259 | 3.0 | 2340 | 1.2881 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Tokenizers 0.12.1
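The card reports only the validation loss of this masked-LM fine-tune; a fill-mask sketch like the one below is the usual way to inspect it. The hub id is a placeholder, since the card does not say where the checkpoint lives.

```python
from transformers import pipeline

# Placeholder hub id: substitute the published location of this fine-tuned checkpoint.
fill_mask = pipeline("fill-mask", model="your-namespace/bert-base-uncased-finetuned-basil")

# BERT-style models use the [MASK] token.
for prediction in fill_mask("The senator said the new bill was [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```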
Captain272/lstm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-08T16:14:06Z
--- license: mit --- ### apulian-rooster-v0.1 on Stable Diffusion -- # Inspired by the design of the Galletto (rooster) typical of ceramics and pottery made in Grottaglie, Puglia (Italy). This is the `<apulian-rooster-v0.1>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<apulian-rooster-v0.1> 0](https://huggingface.co/sd-concepts-library/apulian-rooster-v0-1/resolve/main/concept_images/5.jpeg) ![<apulian-rooster-v0.1> 1](https://huggingface.co/sd-concepts-library/apulian-rooster-v0-1/resolve/main/concept_images/4.jpeg) ![<apulian-rooster-v0.1> 2](https://huggingface.co/sd-concepts-library/apulian-rooster-v0-1/resolve/main/concept_images/1.jpeg) ![<apulian-rooster-v0.1> 3](https://huggingface.co/sd-concepts-library/apulian-rooster-v0-1/resolve/main/concept_images/2.jpeg) ![<apulian-rooster-v0.1> 4](https://huggingface.co/sd-concepts-library/apulian-rooster-v0-1/resolve/main/concept_images/3.jpeg) ![<apulian-rooster-v0.1> 5](https://huggingface.co/sd-concepts-library/apulian-rooster-v0-1/resolve/main/concept_images/0.jpeg)
Carlork314/Carlos
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-08T16:15:52Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/bhorine/ddpm-butterflies-128/tensorboard?#scalars)
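The "How to use" section above is still a TODO; a minimal sampling sketch with `diffusers` is given below. The checkpoint id is taken from the TensorBoard link in the card, and `DDPMPipeline` is assumed to be the appropriate pipeline class for this unconditional 128x128 model.

```python
from diffusers import DDPMPipeline

# Checkpoint id taken from the card's TensorBoard link; adjust if the repository differs.
pipeline = DDPMPipeline.from_pretrained("bhorine/ddpm-butterflies-128")

# Unconditional sampling: no prompt, just draw one 128x128 butterfly image.
image = pipeline().images[0]
image.save("butterfly.png")
```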
Carlork314/Xd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/piemadd/1662653961299/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1521050682983424003/yERaHagV_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Piero Maddaleni 2027</div> <div style="text-align: center; font-size: 14px;">@piemadd</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Piero Maddaleni 2027. | Data | Piero Maddaleni 2027 | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 322 | | Short tweets | 540 | | Tweets kept | 2380 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jem4xdn0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @piemadd's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6e8s7bst) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6e8s7bst/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/piemadd') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Cathy/reranking_model
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2022-09-08T16:57:36Z
--- language: nl license: mit --- # MedRoBERTa.nl finetuned for negation ## Description This model is a finetuned RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. All code used for the creation of MedRoBERTa.nl can be found at https://github.com/cltl-students/verkijk_stella_rma_thesis_dutch_medical_language_model. The publication associated with the negation detection task can be found at https://arxiv.org/abs/2209.00470. The code for finetuning the model can be found at https://github.com/umcu/negation-detection. ## Minimal example ```python import torch from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer\ .from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection") model = AutoModelForTokenClassification\ .from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection") some_text = "De patient was niet aanspreekbaar en hij zag er grauw uit. \ Hij heeft de inspanningstest echter goed doorstaan." inputs = tokenizer(some_text, return_tensors='pt') output = model(**inputs) probas = torch.nn.functional.softmax(output.logits[0], dim=-1).detach().numpy() # map the predicted probabilities back to the input tokens input_tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0]) target_map = {0: 'B-Negated', 1:'B-NotNegated',2:'I-Negated',3:'I-NotNegated'} results = [{'token': input_tokens[idx], 'proba_negated': proba_arr[0]+proba_arr[2], 'proba_not_negated': proba_arr[1]+proba_arr[3] } for idx,proba_arr in enumerate(probas)] ``` It is perhaps good to note that we assume the [Inside-Outside-Beginning](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) format. ## Intended use The model is finetuned for negation detection on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. This particular model is trained on 512-token windows surrounding the concept to be negated. Note that we also trained a biLSTM which can be incorporated in [MedCAT](https://github.com/CogStack/MedCAT). ## Data The pre-trained model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure. The finetuning was performed on the Erasmus Dutch Clinical Corpus (EDCC), which can be obtained through Jan Kors ([email protected]). The EDCC is described here: https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-014-0373-3 ## Authors MedRoBERTa.nl: Stella Verkijk, Piek Vossen. Finetuning: Bram van Es, Sebastiaan Arends. ## Contact If you are having problems with this model please add an issue on our git: https://github.com/umcu/negation-detection/issues ## Usage If you use the model in your work please refer either to https://doi.org/10.5281/zenodo.6980076 or https://doi.org/10.48550/arXiv.2209.00470 ## References Paper: Verkijk, S. & Vossen, P. (2022) MedRoBERTa.nl: A Language Model for Dutch Electronic Health Records. Computational Linguistics in the Netherlands Journal, 11. Paper: Bram van Es, Leon C. Reteig, Sander C. Tan, Marijn Schraagen, Myrthe M. Hemker, Sebastiaan R.S. Arends, Miguel A.R. Rios, Saskia Haitjema (2022): Negation detection in Dutch clinical texts: an evaluation of rule-based and machine learning methods, Arxiv
Cedille/fr-boris
[ "pytorch", "gptj", "text-generation", "fr", "dataset:c4", "arxiv:2202.03371", "transformers", "causal-lm", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "GPTJForCausalLM" ], "model_type": "gptj", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
401
2022-09-08T16:58:04Z
--- license: mit --- ### fractal on Stable Diffusion This is the `<fractal>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). The images composing the token are here: https://huggingface.co/datasets/Nbardy/Fractal-photos Thank you to the photographers, who graciously published these photos for free, non-commercial use. Each photo has the artist's name in the dataset hosted on Hugging Face.
dccuchile/albert-base-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit --- ### Nebula on Stable Diffusion This is the `<nebula>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<nebula> 0](https://huggingface.co/sd-concepts-library/nebula/resolve/main/concept_images/5.jpeg) ![<nebula> 1](https://huggingface.co/sd-concepts-library/nebula/resolve/main/concept_images/4.jpeg) ![<nebula> 2](https://huggingface.co/sd-concepts-library/nebula/resolve/main/concept_images/1.jpeg) ![<nebula> 3](https://huggingface.co/sd-concepts-library/nebula/resolve/main/concept_images/2.jpeg) ![<nebula> 4](https://huggingface.co/sd-concepts-library/nebula/resolve/main/concept_images/3.jpeg) ![<nebula> 5](https://huggingface.co/sd-concepts-library/nebula/resolve/main/concept_images/0.jpeg)
dccuchile/albert-large-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
2022-09-08T18:13:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5616581968995631 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7956 - Matthews Correlation: 0.5617 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5235 | 1.0 | 535 | 0.5394 | 0.4192 | | 0.3498 | 2.0 | 1070 | 0.4989 | 0.5065 | | 0.2343 | 3.0 | 1605 | 0.5506 | 0.5518 | | 0.1744 | 4.0 | 2140 | 0.7471 | 0.5354 | | 0.1243 | 5.0 | 2675 | 0.7956 | 0.5617 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 1.18.4 - Tokenizers 0.12.1
dccuchile/albert-xlarge-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7666666666666666 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3342245989304813 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33827893175074186 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3885491939966648 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.542 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3201754385964912 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33564814814814814 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8865451258098539 - name: F1 (macro) type: f1_macro value: 0.8770785182418419 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8401408450704225 - name: F1 (macro) type: f1_macro value: 0.6242491296371133 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6749729144095341 - name: F1 (macro) type: f1_macro value: 0.6505812342477592 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9607706753842944 - name: F1 (macro) type: f1_macro value: 0.8781957733610742 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8994045753682232 - name: F1 (macro) type: f1_macro value: 0.8968786782259857 --- # relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated/raw/main/analogy.json)): - Accuracy on SAT (full): 0.3342245989304813 - Accuracy on SAT: 0.33827893175074186 - Accuracy on BATS: 0.3885491939966648 - Accuracy on U2: 0.3201754385964912 - Accuracy on U4: 0.33564814814814814 - Accuracy on Google: 0.542 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8865451258098539 - Micro F1 score on CogALexV: 0.8401408450704225 - Micro F1 score on EVALution: 0.6749729144095341 - Micro F1 score on K&H+N: 0.9607706753842944 - Micro F1 score on ROOT09: 0.8994045753682232 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.7666666666666666 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity - split: train - data_eval: relbert/conceptnet_high_confidence - split_eval: full - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - loss_function: nce_logout - classification_loss: True - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 1 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - exclude_relation_eval: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-09-08T19:53:10Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/paraphrase-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-mpnet-base-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
CennetOguz/distilbert-base-uncased-finetuned-recipe-1
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-09-09T02:43:59Z
--- language: ja license: cc-by-sa-4.0 --- # electra-base-cyberbullying This is an [ELECTRA](https://github.com/google-research/electra) Small model for the Japanese language finetuned for automatic cyberbullying detection. The model was based on [Izumi Lab ELECTRA small Japanese discriminator](https://huggingface.co/izumi-lab/electra-small-japanese-discriminator), and later finetuned on a balanced dataset created by unifying two datasets, namely "Harmful BBS Japanese comments dataset" and "Twitter Japanese cyberbullying dataset". ## Licenses The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License. <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a> ## Citations Please, cite this model using the following citation. ``` @inproceedings{tanabe2022electra-small-cyberbullying, title={北見工業大学 テキスト情報処理研究室 ELECTRA Small ネットいじめ検出モデル (Izumi Lab ver.)}, author={田邊 威裕 and プタシンスキ ミハウ and エロネン ユーソ and 桝井 文人}, publisher={HuggingFace}, year={2022}, url = "https://huggingface.co/kit-nlp/electra-small-japanese-discriminator-cyberbullying" } ```
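The card explains what the classifier was finetuned for but includes no inference snippet; a minimal sketch with the `transformers` text-classification pipeline follows, assuming the hub id given in the citation (`kit-nlp/electra-small-japanese-discriminator-cyberbullying`). The label names printed are whatever that checkpoint's config defines.

```python
from transformers import pipeline

# Hub id taken from the citation URL in the card above.
classifier = pipeline(
    "text-classification",
    model="kit-nlp/electra-small-japanese-discriminator-cyberbullying",
)

# One harmless and one abusive Japanese example.
for text in ["今日の発表、とても分かりやすかったです。", "黙れ、バカ。消えろ。"]:
    print(text, classifier(text))
```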
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-09-09T02:52:52Z
--- language: en thumbnail: http://www.huggingtweets.com/amouranth/1668957567411/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1038692008720719877/X7uKFWQc_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Amouranth</div> <div style="text-align: center; font-size: 14px;">@amouranth</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Amouranth. | Data | Amouranth | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 537 | | Short tweets | 588 | | Tweets kept | 2125 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/v368cfyz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @amouranth's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27hif4yr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27hif4yr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/amouranth') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Cheatham/xlm-roberta-large-finetuned-d1
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- license: apache-2.0 tags: - object-detection - vision datasets: - coco widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg example_title: Savanna - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg example_title: Football Match - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg example_title: Airport --- # Conditional DETR model with ResNet-50 backbone Conditional DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Meng et al. and first released in [this repository](https://github.com/Atten4Vis/ConditionalDETR). ## Model description The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/conditional_detr_curve.jpg) ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=microsoft/conditional-detr) to look for all available Conditional DETR models. 
### How to use Here is how to use this model: ```python from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50") model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API # let's only keep detections with score > 0.7 target_sizes = torch.tensor([image.size[::-1]]) results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ``` This should output: ``` Detected remote with confidence 0.833 at location [38.31, 72.1, 177.63, 118.45] Detected cat with confidence 0.831 at location [9.2, 51.38, 321.13, 469.0] Detected cat with confidence 0.804 at location [340.3, 16.85, 642.93, 370.95] ``` Currently, both the feature extractor and model support PyTorch. ## Training data The Conditional DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ### BibTeX entry and citation info ```bibtex @inproceedings{MengCFZLYS021, author = {Depu Meng and Xiaokang Chen and Zejia Fan and Gang Zeng and Houqiang Li and Yuhui Yuan and Lei Sun and Jingdong Wang}, title = {Conditional {DETR} for Fast Training Convergence}, booktitle = {2021 {IEEE/CVF} International Conference on Computer Vision, {ICCV} 2021, Montreal, QC, Canada, October 10-17, 2021}, } ```
Cheatham/xlm-roberta-large-finetuned-d12
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- tags: - generated_from_trainer datasets: - i2b22014 metrics: - precision - recall - f1 - accuracy model-index: - name: electramed-small-deid2014-ner-v3 results: - task: name: Token Classification type: token-classification dataset: name: i2b22014 type: i2b22014 config: i2b22014-deid split: train args: i2b22014-deid metrics: - name: Precision type: precision value: 0.7776378519384726 - name: Recall type: recall value: 0.7946502435885652 - name: F1 type: f1 value: 0.7860520094562647 - name: Accuracy type: accuracy value: 0.9908687950002661 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electramed-small-deid2014-ner-v3 This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the i2b22014 dataset. It achieves the following results on the evaluation set: - Loss: 0.0354 - Precision: 0.7776 - Recall: 0.7947 - F1: 0.7861 - Accuracy: 0.9909 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0125 | 1.0 | 1838 | 0.1338 | 0.3514 | 0.3812 | 0.3657 | 0.9715 | | 0.0032 | 2.0 | 3676 | 0.0856 | 0.4444 | 0.5156 | 0.4774 | 0.9778 | | 0.0012 | 3.0 | 5514 | 0.0678 | 0.5222 | 0.5994 | 0.5581 | 0.9819 | | 0.0006 | 4.0 | 7352 | 0.0547 | 0.6900 | 0.7025 | 0.6962 | 0.9865 | | 0.018 | 5.0 | 9190 | 0.0466 | 0.7227 | 0.7468 | 0.7345 | 0.9881 | | 0.0002 | 6.0 | 11028 | 0.0419 | 0.7396 | 0.7664 | 0.7528 | 0.9891 | | 0.0002 | 7.0 | 12866 | 0.0390 | 0.7730 | 0.7693 | 0.7712 | 0.9901 | | 0.0002 | 8.0 | 14704 | 0.0368 | 0.7778 | 0.7822 | 0.7800 | 0.9906 | | 0.0001 | 9.0 | 16542 | 0.0359 | 0.7765 | 0.7898 | 0.7831 | 0.9907 | | 0.0001 | 10.0 | 18380 | 0.0354 | 0.7776 | 0.7947 | 0.7861 | 0.9909 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
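The card stops short of an inference example. As a rough sketch only (the repo id below is a placeholder, since the card does not name the published checkpoint), a de-identification checkpoint like this can be queried through the token-classification pipeline:

```python
from transformers import pipeline

# Placeholder repo id - substitute the actual path of the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="<namespace>/electramed-small-deid2014-ner-v3",
    aggregation_strategy="simple",  # merge word pieces into whole PHI spans
)

note = "Patient John Smith, MRN 123456, was seen at Mercy Hospital on 03/04/2014."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

The `aggregation_strategy` argument groups sub-word predictions into full spans, which is usually what you want when extracting PHI entities.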
Chester/traffic-rec
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym # `load_from_hub` and `evaluate_agent` are helper functions defined in the Q-Learning course notebook; they are not imported from a library. model = load_from_hub(repo_id="AliKahe/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Chinat/test-classifier
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-09T09:06:46Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: NLP2122_FranciosoDonato results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLP2122_FranciosoDonato This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.21.3 - Pytorch 1.11.0+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
Ching/negation_detector
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - de license: mit datasets: - germaner metrics: - precision - recall - f1 - accuracy model-index: - name: gbert-large-germaner results: - task: name: Token Classification type: token-classification dataset: name: germaner type: germaner args: default metrics: - name: precision type: precision value: 0.8755112474437627 - name: recall type: recall value: 0.8861578266494179 - name: f1 type: f1 value: 0.8808023659508808 - name: accuracy type: accuracy value: 0.9788673918458856 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gbert-large-germaner This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on the germaner dataset. It achieves the following results on the evaluation set: - precision: 0.8755 - recall: 0.8862 - f1: 0.8808 - accuracy: 0.9789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - num_train_epochs: 5 - train_batch_size: 8 - eval_batch_size: 8 - learning_rate: 3e-05 - weight_decay_rate: 0.01 - num_warmup_steps: 0 - fp16: True ### Framework versions - Transformers 4.21.3 - Datasets 1.18.0 - Tokenizers 0.12.1
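As a usage sketch (not shown in the card; the repo id is a placeholder), predictions from a token-classification checkpoint like this one can also be obtained without the pipeline API:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Placeholder repo id - the card does not state where the fine-tuned weights are published.
repo = "<namespace>/gbert-large-germaner"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

text = "Angela Merkel besuchte im Juli Paris."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Print one predicted label per word piece.
for token, label_id in zip(
    tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
    logits.argmax(dim=-1)[0].tolist(),
):
    print(token, model.config.id2label[label_id])
```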
ChoboAvenger/DialoGPT-small-DocBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-09T09:19:36Z
--- tags: - generated_from_trainer model-index: - name: DNADebertaK6_Zebrafish results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DNADebertaK6_Zebrafish This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4958 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 600001 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 4.1727 | 0.59 | 20000 | 1.8535 | | 1.8381 | 1.18 | 40000 | 1.7512 | | 1.7561 | 1.77 | 60000 | 1.7235 | | 1.7281 | 2.36 | 80000 | 1.7019 | | 1.7065 | 2.95 | 100000 | 1.6822 | | 1.6876 | 3.54 | 120000 | 1.6639 | | 1.6718 | 4.13 | 140000 | 1.6501 | | 1.6562 | 4.71 | 160000 | 1.6350 | | 1.6429 | 5.3 | 180000 | 1.6211 | | 1.6313 | 5.89 | 200000 | 1.6102 | | 1.6207 | 6.48 | 220000 | 1.6001 | | 1.6099 | 7.07 | 240000 | 1.5902 | | 1.6 | 7.66 | 260000 | 1.5799 | | 1.5925 | 8.25 | 280000 | 1.5726 | | 1.5847 | 8.84 | 300000 | 1.5645 | | 1.5783 | 9.43 | 320000 | 1.5596 | | 1.5712 | 10.02 | 340000 | 1.5510 | | 1.5656 | 10.61 | 360000 | 1.5452 | | 1.5598 | 11.2 | 380000 | 1.5410 | | 1.5548 | 11.79 | 400000 | 1.5342 | | 1.5497 | 12.38 | 420000 | 1.5293 | | 1.546 | 12.96 | 440000 | 1.5241 | | 1.5397 | 13.55 | 460000 | 1.5214 | | 1.5365 | 14.14 | 480000 | 1.5164 | | 1.5321 | 14.73 | 500000 | 1.5115 | | 1.5285 | 15.32 | 520000 | 1.5075 | | 1.5246 | 15.91 | 540000 | 1.5034 | | 1.5217 | 16.5 | 560000 | 1.5029 | | 1.5191 | 17.09 | 580000 | 1.4995 | | 1.516 | 17.68 | 600000 | 1.4958 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
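The card gives no inference snippet. The sketch below is illustrative only: it assumes the tokenizer expects space-separated 6-mers (implied by the K6 name) and uses a placeholder repo id.

```python
from transformers import pipeline

# Placeholder repo id - the card does not say where this checkpoint is published.
fill_mask = pipeline("fill-mask", model="<namespace>/DNADebertaK6_Zebrafish")

# Assumption: the input is a DNA sequence split into 6-mers, with one token masked.
sequence = f"ATGCTA TGCTAG GCTAGC {fill_mask.tokenizer.mask_token} TAGCAT AGCATG"
for prediction in fill_mask(sequence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 4))
```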
ChoboAvenger/DialoGPT-small-joshua
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - conversational - chatbot --- # Rogers DialoGPT Model
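A minimal chat loop in the usual DialoGPT style is sketched below; the repo id is a placeholder because the card does not give one.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id - replace with the actual path of this chatbot checkpoint.
repo = "<namespace>/rogers-dialogpt"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

history = None
for user_text in ["Hi Rogers!", "What are you up to today?"]:
    # DialoGPT-style input: append the EOS token after each user turn.
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")
    bot_input = new_ids if history is None else torch.cat([history, new_ids], dim=-1)
    history = model.generate(bot_input, max_length=200, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[:, bot_input.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```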
ChrisVCB/DialoGPT-medium-cmjs
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1757 - F1: 0.8513 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2986 | 1.0 | 835 | 0.1939 | 0.8077 | | 0.1547 | 2.0 | 1670 | 0.1813 | 0.8351 | | 0.1003 | 3.0 | 2505 | 0.1757 | 0.8513 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
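As an illustration (the repo id is a placeholder), a multilingual NER checkpoint like this one is called the same way whatever the input language:

```python
from transformers import pipeline

# Placeholder repo id - substitute the actual fine-tuned checkpoint.
ner = pipeline(
    "ner",
    model="<namespace>/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)

for sentence in [
    "Jeff Dean arbeitet bei Google in Kalifornien.",
    "Emmanuel Macron est né à Amiens.",
]:
    print([(e["entity_group"], e["word"]) for e in ner(sentence)])
```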
Ci/Pai
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - detr license: unknown datasets: - PubTables-1M --- # The models are taken from https://github.com/microsoft/table-transformer/ # Original model now on the Microsoft org: https://huggingface.co/microsoft/table-transformer-detection I have built a Hugging Face Space: https://huggingface.co/spaces/SalML/TableTransformer2CSV It runs OCR on the table-transformer output image to produce a downloadable CSV table.
Cilan/dalle-knockoff
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### lucky-luck on Stable Diffusion This is the `<lucky-luke>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<lucky-luke> 0](https://huggingface.co/sd-concepts-library/lucky-luck/resolve/main/concept_images/1.jpeg) ![<lucky-luke> 1](https://huggingface.co/sd-concepts-library/lucky-luck/resolve/main/concept_images/5.jpeg) ![<lucky-luke> 2](https://huggingface.co/sd-concepts-library/lucky-luck/resolve/main/concept_images/7.jpeg) ![<lucky-luke> 3](https://huggingface.co/sd-concepts-library/lucky-luck/resolve/main/concept_images/3.jpeg) ![<lucky-luke> 4](https://huggingface.co/sd-concepts-library/lucky-luck/resolve/main/concept_images/2.jpeg) ![<lucky-luke> 5](https://huggingface.co/sd-concepts-library/lucky-luck/resolve/main/concept_images/6.jpeg) ![<lucky-luke> 6](https://huggingface.co/sd-concepts-library/lucky-luck/resolve/main/concept_images/0.jpeg) ![<lucky-luke> 7](https://huggingface.co/sd-concepts-library/lucky-luck/resolve/main/concept_images/4.jpeg)
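The card links to Colab notebooks but includes no code. With a recent 🧨 Diffusers release, and assuming this repository ships the usual `learned_embeds.bin` file for sd-concepts-library concepts, loading the token could look roughly like this:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumes the repo contains a learned_embeds.bin defining the <lucky-luke> token.
pipe.load_textual_inversion("sd-concepts-library/lucky-luck")

image = pipe("a portrait of <lucky-luke> riding into the sunset").images[0]
image.save("lucky_luke.png")
```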
Clarianliz30/Caitlyn
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en --- <p align="center"> <img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%"> </p> **Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch** ## Task: recognition https://github.com/mindee/doctr ### Example usage: ```python >>> from doctr.io import DocumentFile >>> from doctr.models import ocr_predictor, from_hub >>> img = DocumentFile.from_images(['<image_path>']) >>> # Load your model from the hub >>> model = from_hub('mindee/my-model') >>> # Pass it to the predictor >>> # If your model is a recognition model: >>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large', >>> reco_arch=model, >>> pretrained=True) >>> # If your model is a detection model: >>> predictor = ocr_predictor(det_arch=model, >>> reco_arch='crnn_mobilenet_v3_small', >>> pretrained=True) >>> # Get your predictions >>> res = predictor(img) ```
CleveGreen/FieldClassifier
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
2022-09-09T12:28:47Z
--- language: en --- <p align="center"> <img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%"> </p> **Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch** ## Task: recognition https://github.com/mindee/doctr ### Example usage: ```python >>> from doctr.io import DocumentFile >>> from doctr.models import ocr_predictor, from_hub >>> img = DocumentFile.from_images(['<image_path>']) >>> # Load your model from the hub >>> model = from_hub('mindee/my-model') >>> # Pass it to the predictor >>> # If your model is a recognition model: >>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large', >>> reco_arch=model, >>> pretrained=True) >>> # If your model is a detection model: >>> predictor = ocr_predictor(det_arch=model, >>> reco_arch='crnn_mobilenet_v3_small', >>> pretrained=True) >>> # Get your predictions >>> res = predictor(img) ```
CleveGreen/FieldClassifier_v2
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46
null
--- tags: - autotrain - vision - image-classification datasets: - dhruv0808/autotrain-data-ad_detection_ver_1 widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 0.009652698067986935 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1395053127 - CO2 Emissions (in grams): 0.0097 ## Validation Metrics - Loss: 0.178 - Accuracy: 0.941 - Precision: 0.947 - Recall: 0.947 - AUC: 0.974 - F1: 0.947
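For reference, an AutoTrain vision checkpoint like this one is normally queried through the image-classification pipeline; the repo id below is a placeholder, since the card only lists the AutoTrain model ID.

```python
from transformers import pipeline

# Placeholder repo id - the card only gives the AutoTrain model ID (1395053127).
classifier = pipeline(
    "image-classification",
    model="<namespace>/autotrain-ad_detection_ver_1-1395053127",
)

url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
for result in classifier(url):
    print(result["label"], round(result["score"], 3))
```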
CohleM/mbert-nepali-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-09T14:15:36Z
--- language: - ko tags: - pytorch - causal-lm license: apache-2.0 --- # Polyglot-Ko-3.8B ## Model Description Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team. | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 3,809,974,272 | | \\(n_{layers}\\) | 32 | | \\(d_{model}\\) | 3,072 | | \\(d_{ff}\\) | 12,288 | | \\(n_{heads}\\) | 24 | | \\(d_{head}\\) | 128 | | \\(n_{ctx}\\) | 2,048 | | \\(n_{vocab}\\) | 30,003 / 30,080 | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | The model consists of 32 transformer layers with a model dimension of 3072, and a feedforward dimension of 12288. The model dimension is split into 24 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30003. ## Training data Polyglot-Ko-3.8B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process has abided by South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use. | Source |Size (GB) | Link | |-------------------------------------|---------|------------------------------------------| | Korean blog posts | 682.3 | - | | Korean news dataset | 87.0 | - | | Modu corpus | 26.4 |corpus.korean.go.kr | | Korean patent dataset | 19.0 | - | | Korean Q & A dataset | 18.1 | - | | KcBert dataset | 12.7 | github.com/Beomi/KcBERT | | Korean fiction dataset | 6.1 | - | | Korean online comments | 4.2 | - | | Korean wikipedia | 1.4 | ko.wikipedia.org | | Clova call | < 1.0 | github.com/clovaai/ClovaCall | | Naver sentiment movie corpus | < 1.0 | github.com/e9t/nsmc | | Korean hate speech dataset | < 1.0 | - | | Open subtitles | < 1.0 | opus.nlpl.eu/OpenSubtitles.php | | AIHub various tasks datasets | < 1.0 |aihub.or.kr | | Standard Korean language dictionary | < 1.0 | stdict.korean.go.kr/main/main.do | Furthermore, in order to avoid the model memorizing and generating personally identifiable information (PII) in the training data, we masked out the following sensitive information in the pre-processing stage: * `<|acc|>` : bank account number * `<|rrn|>` : resident registration number * `<|tell|>` : phone number ## Training procedure Polyglot-Ko-3.8B was trained for 219 billion tokens over 105,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token. 
## How to use This model can be easily loaded using the `AutoModelForCausalLM` class: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-3.8b") model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-3.8b") ``` ## Evaluation results We evaluate Polyglot-Ko-3.8B on [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper. The following tables show the results when the number of few-shot examples differ. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, `n` refers to the number of few-shot examples. ```console python main.py \ --model gpt2 \ --model_args pretrained='EleutherAI/polyglot-ko-3.8b' \ --tasks kobest_copa,kobest_hellaswag \ --num_fewshot $YOUR_NUM_FEWSHOT \ --batch_size $YOUR_BATCH_SIZE \ --device $YOUR_DEVICE \ --output_path $/path/to/output/ ``` ### COPA (F1) | Model | params | n=0 | n=5 | n=10 | n=50 | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.7595** | **0.7608** | **0.7638** | **0.7788** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.7937 | 0.8108 | 0.8037 | 0.8369 | <img src="https://user-images.githubusercontent.com/19511788/233820235-6f617932-3b18-4534-be14-8df9e80b8a06.jpg" width="1000px"> ### HellaSwag (F1) | Model | params |n=0 | n=5 | n=10 | n=50 | |------------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.5707** | **0.5830** | **0.5670** | **0.5787** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.5976 | 0.5998 | 0.5979 | 0.6208 | | 
[EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.5954 | 0.6306 | 0.6098 | 0.6118 | <img src="https://user-images.githubusercontent.com/19511788/233820233-0127983e-4b37-48ce-89e5-51509ed9b1f2.jpg" width="1000px"> ## Limitations and Biases Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content. ## Citation and Related Information ### BibTeX entry If you find our work useful, please consider citing: ```bibtex @misc{polyglot-ko, title = {{Polyglot-Ko: Open-Source Korean Autoregressive Language Model}}, author = {Ko, Hyunwoong and Yang, Kichang and Ryu, Minho and Choi, Taekyoon and Yang, Seungmu and Hyun, jiwung and Park, Sungho}, url = {https://www.github.com/eleutherai/polyglot}, month = {9}, year = {2022}, } ``` ### Licensing All our models are licensed under the terms of the Apache License 2.0. ``` Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ### Acknowledgement This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.
Coldestadam/Breakout_Mentors_SpongeBob_Model
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2022-09-09T14:15:55Z
--- tags: - autotrain - vision - image-classification datasets: - DominikB/autotrain-data-person-classifier widget: - src: https://100-pics.net/images/answers/de/schauspieler/schauspieler_22135_191026.jpeg example_title: Jack Black 1 - src: https://assets.rebelmouse.io/eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjE1MTE5NS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTcxNzUyMDE1MX0.JN64_PUw8Ldz5QZ5DV9ZGZ5VgO6x9nEFqhGFvc6sKMY/img.jpg?width=1200&height=600&coordinates=0%2C408%2C0%2C408 example_title: Jack Black 2 - src: https://nationaltoday.com/wp-content/uploads/2022/05/107-Johnny-Depp.jpg example_title: Johnny Depp 1 - src: https://de.web.img2.acsta.net/newsv7/22/09/08/09/10/3547575.jpg example_title: Johnny Depp 2 co2_eq_emissions: emissions: 0.0143182831771501 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1401653210 - CO2 Emissions (in grams): 0.0143 ## Validation Metrics - Loss: 0.000 - Accuracy: 1.000 - Precision: 1.000 - Recall: 1.000 - AUC: 1.000 - F1: 1.000
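A sketch of running such a binary image classifier without the pipeline helper is below; the repo id is a placeholder since the card only lists the AutoTrain model ID.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder repo id - the card only gives the AutoTrain model ID (1401653210).
repo = "<namespace>/autotrain-person-classifier-1401653210"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

url = "https://nationaltoday.com/wp-content/uploads/2022/05/107-Johnny-Depp.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```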
ComCom/gpt2
[ "pytorch", "gpt2", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: mit --- ### Russian on Stable Diffusion This is the `<Russian>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<Russian> 0](https://huggingface.co/sd-concepts-library/russian/resolve/main/concept_images/1.jpeg) ![<Russian> 1](https://huggingface.co/sd-concepts-library/russian/resolve/main/concept_images/3.jpeg) ![<Russian> 2](https://huggingface.co/sd-concepts-library/russian/resolve/main/concept_images/2.jpeg) ![<Russian> 3](https://huggingface.co/sd-concepts-library/russian/resolve/main/concept_images/0.jpeg) ![<Russian> 4](https://huggingface.co/sd-concepts-library/russian/resolve/main/concept_images/4.jpeg)
ComCom-Dev/gpt2-bible-test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - decisionTransformer - deep reinforcement datasets: - edbeeching/decision_transformer_gym_replay license: - mit --- ### Running training - Num examples = 1000 - Num Epochs = 120 - Instantaneous batch size per device = 64 - Total train batch size = 64 - Gradient Accumulation steps = 1 - Total optimization steps = 1920 ### Train Output - global_step = 1920 - train_runtime = 1849.2158 - train_samples_per_second = 64.892 - train_steps_per_second = 1.038 - train_loss = 0.04717305501302083 - epoch = 120.0 ### Dataset - edbeeching/decision_transformer_gym_replay - halfcheetah-expert-v2
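The card names the replay dataset and environment but shows no code; the training split it refers to can be pulled with 🤗 Datasets as sketched here:

```python
from datasets import load_dataset

# Dataset repo and config are the ones named in the card.
data = load_dataset("edbeeching/decision_transformer_gym_replay", "halfcheetah-expert-v2")

print(data)
first_episode = data["train"][0]
print(list(first_episode.keys()))  # e.g. observations, actions, rewards, dones
```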
Connor-tech/bert_cn_finetuning
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2022-09-09T14:53:35Z
--- thumbnail: "https://repository-images.githubusercontent.com/523487884/fdb03a69-8353-4387-b5fc-0d85f888a63f" datasets: - ChristophSchuhmann/improved_aesthetics_6plus license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - image-to-image --- # Stable Diffusion Image Variations Model Card 📣 V2 model released, and blurriness issues fixed! 📣 🧨🎉 Image Variations is now natively supported in 🤗 Diffusers! 🎉🧨 ![](https://raw.githubusercontent.com/justinpinkney/stable-diffusion/main/assets/im-vars-thin.jpg) ## Version 2 This version of Stable Diffusion has been fine-tuned from [CompVis/stable-diffusion-v1-4-original](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) to accept CLIP image embeddings rather than text embeddings. This allows the creation of "image variations" similar to DALLE-2 using Stable Diffusion. This version of the weights has been ported to huggingface Diffusers; using it with the Diffusers library requires the [Lambda Diffusers repo](https://github.com/LambdaLabsML/lambda-diffusers). This model was trained in two stages and longer than the original variations model, and gives better image quality and better CLIP-rated similarity compared to the original version. See training details and v1 vs v2 comparison below. ## Example Make sure you are using a version of Diffusers >=0.8.0 (for older versions see the old instructions at the bottom of this model card) ```python from diffusers import StableDiffusionImageVariationPipeline from PIL import Image from torchvision import transforms device = "cuda:0" sd_pipe = StableDiffusionImageVariationPipeline.from_pretrained( "lambdalabs/sd-image-variations-diffusers", revision="v2.0", ) sd_pipe = sd_pipe.to(device) im = Image.open("path/to/image.jpg") tform = transforms.Compose([ transforms.ToTensor(), transforms.Resize( (224, 224), interpolation=transforms.InterpolationMode.BICUBIC, antialias=False, ), transforms.Normalize( [0.48145466, 0.4578275, 0.40821073], [0.26862954, 0.26130258, 0.27577711]), ]) inp = tform(im).to(device).unsqueeze(0) out = sd_pipe(inp, guidance_scale=3) out["images"][0].save("result.jpg") ``` ### The importance of resizing correctly... (or not) Note that due to a bit of an oversight during training, the model expects resized images without anti-aliasing. This turns out to make a big difference, and it is important to do the resizing the same way during inference. When passing a PIL image to the Diffusers pipeline, antialiasing will be applied during resize, so it's better to input a tensor which you have prepared manually according to the transform in the example above! Here are examples of images generated without (top) and with (bottom) anti-aliasing during resize. (Input is [this image](https://github.com/SHI-Labs/Versatile-Diffusion/blob/master/assets/ghibli.jpg)) ![](alias-montage.jpg) ![](default-montage.jpg) ### V1 vs V2 Here's an example of V1 vs V2: version two was trained more carefully and for longer; see the details below. V2-top vs V1-bottom ![](v2-montage.jpg) ![](v1-montage.jpg) Input images: ![](inputs.jpg) One important thing to note is that due to the longer training, V2 appears to have memorised some common images from the training data, e.g. now the previous example of the Girl with a Pearl Earring almost perfectly reproduces the original rather than creating variations. You can always use v1 by specifying `revision="v1.0"`.
v2 output for girl with a pearl earring as input (guidance scale=3) ![](earring.jpg) # Training **Training Procedure** This model is fine-tuned from Stable Diffusion v1-3 where the text encoder has been replaced with an image encoder. The training procedure is the same as for Stable Diffusion except for the fact that images are encoded through a ViT-L/14 image-encoder including the final projection layer to the CLIP shared embedding space. The model was trained on LAION improved aesthetics 6plus. - **Hardware:** 8 x A100-40GB GPUs (provided by [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud)) - **Optimizer:** AdamW - **Stage 1** - Fine tune only CrossAttention layer weights from Stable Diffusion v1.4 model - **Steps**: 46,000 - **Batch:** batch size=4, GPUs=8, Gradient Accumulations=4. Total batch size=128 - **Learning rate:** warmup to 1e-5 for 10,000 steps and then kept constant - **Stage 2** - Resume from Stage 1 training the whole unet - **Steps**: 50,000 - **Batch:** batch size=4, GPUs=8, Gradient Accumulations=5. Total batch size=160 - **Learning rate:** warmup to 1e-5 for 5,000 steps and then kept constant Training was done using a [modified version of the original Stable Diffusion training code](https://github.com/justinpinkney/stable-diffusion). # Uses _The following section is adapted from the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v1-4)_ ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include: - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. 
## Old instructions If you are using a diffusers version <0.8.0 there is no `StableDiffusionImageVariationPipeline`, in this case you need to use an older revision (`2ddbd90b14bc5892c19925b15185e561bc8e5d0a`) in conjunction with the lambda-diffusers repo: First clone [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers) and install any requirements (in a virtual environment in the example below): ```bash git clone https://github.com/LambdaLabsML/lambda-diffusers.git cd lambda-diffusers python -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` Then run the following python code: ```python from pathlib import Path from lambda_diffusers import StableDiffusionImageEmbedPipeline from PIL import Image import torch device = "cuda" if torch.cuda.is_available() else "cpu" pipe = StableDiffusionImageEmbedPipeline.from_pretrained( "lambdalabs/sd-image-variations-diffusers", revision="2ddbd90b14bc5892c19925b15185e561bc8e5d0a", ) pipe = pipe.to(device) im = Image.open("your/input/image/here.jpg") num_samples = 4 image = pipe(num_samples*[im], guidance_scale=3.0) image = image["sample"] base_path = Path("outputs/im2im") base_path.mkdir(exist_ok=True, parents=True) for idx, im in enumerate(image): im.save(base_path/f"{idx:06}.jpg") ``` *This model card was written by: Justin Pinkney and is based on the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v1-4).*
Connorvr/TeachingGen
[ "pytorch", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-09-09T15:30:08Z
--- license: mit --- ### Tony DiTerlizzi's Planescape Art on Stable Diffusion This is the `<tony-diterlizzi-planescape>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<tony-diterlizzi-planescape> 0](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/1.jpeg) ![<tony-diterlizzi-planescape> 1](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/11.jpeg) ![<tony-diterlizzi-planescape> 2](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/8.jpeg) ![<tony-diterlizzi-planescape> 3](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/22.jpeg) ![<tony-diterlizzi-planescape> 4](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/19.jpeg) ![<tony-diterlizzi-planescape> 5](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/20.jpeg) ![<tony-diterlizzi-planescape> 6](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/25.jpeg) ![<tony-diterlizzi-planescape> 7](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/5.jpeg) ![<tony-diterlizzi-planescape> 8](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/9.jpeg) ![<tony-diterlizzi-planescape> 9](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/21.jpeg) ![<tony-diterlizzi-planescape> 10](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/24.jpeg) ![<tony-diterlizzi-planescape> 11](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/7.jpeg) ![<tony-diterlizzi-planescape> 12](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/3.jpeg) ![<tony-diterlizzi-planescape> 13](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/2.jpeg) ![<tony-diterlizzi-planescape> 14](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/6.jpeg) ![<tony-diterlizzi-planescape> 15](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/10.jpeg) ![<tony-diterlizzi-planescape> 16](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/18.jpeg) ![<tony-diterlizzi-planescape> 17](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/0.jpeg) ![<tony-diterlizzi-planescape> 18](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/14.jpeg) ![<tony-diterlizzi-planescape> 19](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/16.jpeg) ![<tony-diterlizzi-planescape> 20](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/17.jpeg) ![<tony-diterlizzi-planescape> 21](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/13.jpeg) ![<tony-diterlizzi-planescape> 22](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/4.jpeg) ![<tony-diterlizzi-planescape> 23](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/23.jpeg) ![<tony-diterlizzi-planescape> 24](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/12.jpeg) ![<tony-diterlizzi-planescape> 25](https://huggingface.co/sd-concepts-library/tony-diterlizzi-s-planescape-art/resolve/main/concept_images/15.jpeg)
ConstellationBoi/Oop
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Moeb Style on Stable Diffusion This is the `<moe-bius>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<moe-bius> 0](https://huggingface.co/sd-concepts-library/moeb-style/resolve/main/concept_images/1.jpeg) ![<moe-bius> 1](https://huggingface.co/sd-concepts-library/moeb-style/resolve/main/concept_images/3.jpeg) ![<moe-bius> 2](https://huggingface.co/sd-concepts-library/moeb-style/resolve/main/concept_images/2.jpeg) ![<moe-bius> 3](https://huggingface.co/sd-concepts-library/moeb-style/resolve/main/concept_images/0.jpeg)
Corvus/DialoGPT-medium-CaptainPrice-Extended
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-09-09T17:32:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distilbert-amazon-shoe-reviews results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-amazon-shoe-reviews This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9532 - Accuracy: 0.5779 - F1: [0.62616119 0.46456105 0.50993865 0.55755123 0.734375 ] - Precision: [0.62757927 0.46676662 0.49148534 0.58430541 0.72415507] - Recall: [0.6247495 0.46237624 0.52983172 0.53313982 0.74488753] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------------------------------------------:|:--------------------------------------------------------:|:--------------------------------------------------------:| | 0.9713 | 1.0 | 2813 | 0.9532 | 0.5779 | [0.62616119 0.46456105 0.50993865 0.55755123 0.734375 ] | [0.62757927 0.46676662 0.49148534 0.58430541 0.72415507] | [0.6247495 0.46237624 0.52983172 0.53313982 0.74488753] | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
Coyotl/DialoGPT-test3-arthurmorgan
[ "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-09T18:49:07Z
--- license: mit --- ### scrap-style on Stable Diffusion This is the `<style-scrap>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<style-chewie> 0](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/3.jpeg) ![<style-chewie> 1](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/0.jpeg) ![<style-chewie> 2](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/2.jpeg) ![<style-chewie> 3](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/1.jpeg) ![<style-chewie> 4](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/4.jpeg)
Craftified/Bob
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-09T19:00:20Z
--- license: mit --- ### tela lenca on Stable Diffusion This is the `<tela-lenca>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<tela-lenca> 0](https://huggingface.co/sd-concepts-library/tela-lenca/resolve/main/concept_images/1.jpeg) ![<tela-lenca> 1](https://huggingface.co/sd-concepts-library/tela-lenca/resolve/main/concept_images/3.jpeg) ![<tela-lenca> 2](https://huggingface.co/sd-concepts-library/tela-lenca/resolve/main/concept_images/2.jpeg) ![<tela-lenca> 3](https://huggingface.co/sd-concepts-library/tela-lenca/resolve/main/concept_images/0.jpeg)
Craig/mGqFiPhu
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0" ]
feature-extraction
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-09T19:18:43Z
--- language: - en tags: - stable-diffusion - text-to-image license: bigscience-bloom-rail-1.0 inference: false --- https://huggingface.co/hakurei/waifu-diffusion This is just the EMA version of the model. Everything not required for inference has been removed, which decreases the file size by ~3 gigabytes and shortens the download time.
Craig/paraphrase-MiniLM-L6-v2
[ "pytorch", "bert", "arxiv:1908.10084", "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,026
2022-09-09T19:28:37Z
--- tags: - autotrain - vision - image-classification datasets: - monkseal555/autotrain-data-hurricane3 widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 1.2190828686910384 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1415853436 - CO2 Emissions (in grams): 1.2191 ## Validation Metrics - Loss: 2.420 - Accuracy: 0.224 - Macro F1: 0.067 - Micro F1: 0.224 - Weighted F1: 0.147 - Macro Precision: 0.063 - Micro Precision: 0.224 - Weighted Precision: 0.148 - Macro Recall: 0.104 - Micro Recall: 0.224 - Weighted Recall: 0.224
CrayonShinchan/bart_fine_tune_test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-09T19:43:14Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: orhanxakarsu/turkishPoe-generation-1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # orhanxakarsu/turkishPoe-generation-1 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.7319 - Validation Loss: 5.8020 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 12731, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.003} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 6.7319 | 5.8020 | 0 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
Crispy/dialopt-small-kratos
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### shu doll on Stable Diffusion This is the `<shu-doll>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<shu-doll> 0](https://huggingface.co/sd-concepts-library/shu-doll/resolve/main/concept_images/3.jpeg) ![<shu-doll> 1](https://huggingface.co/sd-concepts-library/shu-doll/resolve/main/concept_images/0.jpeg) ![<shu-doll> 2](https://huggingface.co/sd-concepts-library/shu-doll/resolve/main/concept_images/2.jpeg) ![<shu-doll> 3](https://huggingface.co/sd-concepts-library/shu-doll/resolve/main/concept_images/1.jpeg)
Crystal/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Infill3") model = AutoModelForCausalLM.from_pretrained("BigSalmon/Infill3") ``` ``` Demo: https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy ``` ``` prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]""" input_ids = tokenizer.encode(prompt, return_tensors='pt') outputs = model.generate(input_ids=input_ids, max_length=10 + len(prompt), temperature=1.0, top_k=50, top_p=0.95, do_sample=True, num_return_sequences=5, early_stopping=True) for i in range(5): print(tokenizer.decode(outputs[i])) ``` Most likely outputs (Disclaimer: I highly recommend using this over just generating): ``` prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]""" text = tokenizer.encode(prompt) myinput, past_key_values = torch.tensor([text]), None myinput = myinput myinput= myinput.to(device) logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False) logits = logits[0,-1] probabilities = torch.nn.functional.softmax(logits) best_logits, best_indices = logits.topk(250) best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] text.append(best_indices[0].item()) best_probabilities = probabilities[best_indices].tolist() words = [] print(best_words) ``` Infill / Infilling / Masking / Phrase Masking ``` his contention [blank] by the evidence [sep] was refuted [answer] *** few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer] *** when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer] *** the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer] *** ``` ``` original: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the [MASK] star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently [MASK] the big screen in Garden State, which he also directed. Farrell is pencilled in to [MASK] of Crockett in a film version of 1980s police [MASK] Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme. infill: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the show. The film star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently been seen on the big screen in Garden State, which he also directed. Farrell is pencilled in to play the role of Crockett in a film version of 1980s police drama Miami Vice. Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme. ``` ``` <Prefix> chancing upon a linux user <Prefix> <Suffix> in the present day <Suffix> <Middle> is a rare occurrence <Middle> <Prefix> the mlb lockout has prompted <Prefix> <Suffix> a resolution <Suffix> <Middle> fans to long for <Middle> ```
Culmenus/opus-mt-de-is-finetuned-de-to-is_ekkicc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### smw map on Stable Diffusion This is the `<smw-map>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<smw-map> 0](https://huggingface.co/sd-concepts-library/smw-map/resolve/main/concept_images/5.jpeg) ![<smw-map> 1](https://huggingface.co/sd-concepts-library/smw-map/resolve/main/concept_images/6.jpeg) ![<smw-map> 2](https://huggingface.co/sd-concepts-library/smw-map/resolve/main/concept_images/3.jpeg) ![<smw-map> 3](https://huggingface.co/sd-concepts-library/smw-map/resolve/main/concept_images/0.jpeg) ![<smw-map> 4](https://huggingface.co/sd-concepts-library/smw-map/resolve/main/concept_images/2.jpeg) ![<smw-map> 5](https://huggingface.co/sd-concepts-library/smw-map/resolve/main/concept_images/7.jpeg) ![<smw-map> 6](https://huggingface.co/sd-concepts-library/smw-map/resolve/main/concept_images/1.jpeg) ![<smw-map> 7](https://huggingface.co/sd-concepts-library/smw-map/resolve/main/concept_images/4.jpeg) ![<smw-map> 8](https://huggingface.co/sd-concepts-library/smw-map/resolve/main/concept_images/8.jpeg)
CuongLD/wav2vec2-large-xlsr-vietnamese
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "vi", "dataset:common_voice, infore_25h", "arxiv:2006.11477", "arxiv:2006.13979", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.82 +/- 21.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
CurtisASmith/GPT-JRT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-prompt-e-nce-classification-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8159126984126984 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5213903743315508 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5222551928783383 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6292384658143413 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.768 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4649122807017544 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5277777777777778 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9121591080307367 - name: F1 (macro) type: f1_macro value: 0.9078493464517976 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8328638497652581 - name: F1 (macro) type: f1_macro value: 0.643974348342842 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.652762730227519 - name: F1 (macro) type: f1_macro value: 0.6418800744019266 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9641093413090353 - name: F1 (macro) type: f1_macro value: 0.889375508685358 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8827953619554998 - name: F1 (macro) type: f1_macro value: 0.8807348541974301 --- # relbert/roberta-large-semeval2012-average-prompt-e-nce-classification-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce-classification-conceptnet-validated/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5213903743315508 - Accuracy on SAT: 0.5222551928783383 - Accuracy on BATS: 0.6292384658143413 - Accuracy on U2: 0.4649122807017544 - Accuracy on U4: 0.5277777777777778 - Accuracy on Google: 0.768 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce-classification-conceptnet-validated/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9121591080307367 - Micro F1 score on CogALexV: 0.8328638497652581 - Micro F1 score on EVALution: 0.652762730227519 - Micro F1 score on K&H+N: 0.9641093413090353 - Micro F1 score on ROOT09: 0.8827953619554998 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8159126984126984 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-nce-classification-conceptnet-validated") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity - split: train - data_eval: relbert/conceptnet_high_confidence - split_eval: full - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask> - loss_function: nce_logout - classification_loss: True - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 30 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - exclude_relation_eval: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce-classification-conceptnet-validated/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
CurtisBowser/DialoGPT-medium-sora-three
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Erwin Olaf Style on Stable Diffusion This is the `<erwin-olaf>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<erwin-olaf> 0](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/1.jpeg) ![<erwin-olaf> 1](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/8.jpeg) ![<erwin-olaf> 2](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/5.jpeg) ![<erwin-olaf> 3](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/9.jpeg) ![<erwin-olaf> 4](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/7.jpeg) ![<erwin-olaf> 5](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/3.jpeg) ![<erwin-olaf> 6](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/2.jpeg) ![<erwin-olaf> 7](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/6.jpeg) ![<erwin-olaf> 8](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/10.jpeg) ![<erwin-olaf> 9](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/0.jpeg) ![<erwin-olaf> 10](https://huggingface.co/sd-concepts-library/erwin-olaf-style/resolve/main/concept_images/4.jpeg)
CurtisBowser/DialoGPT-medium-sora-two
[ "pytorch", "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Maurice-Quentin- de-la-Tour-style on Stable Diffusion This is the `<maurice>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<maurice> 0](https://huggingface.co/sd-concepts-library/maurice-quentin-de-la-tour-style/resolve/main/concept_images/3.jpeg) ![<maurice> 1](https://huggingface.co/sd-concepts-library/maurice-quentin-de-la-tour-style/resolve/main/concept_images/0.jpeg) ![<maurice> 2](https://huggingface.co/sd-concepts-library/maurice-quentin-de-la-tour-style/resolve/main/concept_images/2.jpeg) ![<maurice> 3](https://huggingface.co/sd-concepts-library/maurice-quentin-de-la-tour-style/resolve/main/concept_images/1.jpeg)
CurtisBowser/DialoGPT-medium-sora
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: - en - ja - multilingual license: cc-by-4.0 tags: - translation - opus-mt-tc model-index: - name: opus-mt-tc-base-en-ja results: - task: type: translation name: Translation eng-jpn dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-jpn metrics: - type: bleu value: 15.2 name: BLEU --- # Opus Tatoeba English-Japanese *This model was obtained by running the script [convert_marian_to_pytorch.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/marian/convert_marian_to_pytorch.py). The original models were trained by [Jörg Tiedemann](https://blogs.helsinki.fi/tiedeman/) using the [MarianNMT](https://marian-nmt.github.io/) library. See all available `MarianMTModel` models on the profile of the [Helsinki NLP](https://huggingface.co/Helsinki-NLP) group.* * dataset: opus+bt * model: transformer-align * source language(s): eng * target language(s): jpn * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip) * test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt) * test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.eval.txt) ## Benchmarks | testset | BLEU | chr-F | #sent | #words | BP | |---------|-------|-------|-------|--------|----| | Tatoeba-test.eng-jpn | 15.2 | 0.258 | 10000 | 99206 | 1.000 |
CurtisBowser/DialoGPT-small-sora
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-09-09T21:54:47Z
--- language: - en - nl - multilingual license: cc-by-4.0 tags: - translation - opus-mt-tc model-index: - name: opus-mt-tc-base-en-nl results: - task: type: translation name: Translation eng-nld dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-nld metrics: - type: bleu value: 57.5 name: BLEU --- # Opus Tatoeba English-Dutch *This model was obtained by running the script [convert_marian_to_pytorch.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/marian/convert_marian_to_pytorch.py). The original models were trained by [Jörg Tiedemann](https://blogs.helsinki.fi/tiedeman/) using the [MarianNMT](https://marian-nmt.github.io/) library. See all available `MarianMTModel` models on the profile of the [Helsinki NLP](https://huggingface.co/Helsinki-NLP) group.* * dataset: opus+bt * model: transformer-align * source language(s): eng * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download: [opus+bt-2021-04-14.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nld/opus+bt-2021-04-14.zip) * test set translations: [opus+bt-2021-04-14.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nld/opus+bt-2021-04-14.test.txt) * test set scores: [opus+bt-2021-04-14.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nld/opus+bt-2021-04-14.eval.txt) ## Benchmarks | testset | BLEU | chr-F | #sent | #words | BP | |---------|-------|-------|-------|--------|----| | Tatoeba-test.eng-nld | 57.5 | 0.731 | 10000 | 71436 | 0.986 |
CyberMuffin/DialoGPT-small-ChandlerBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - en - de - multilingual license: cc-by-4.0 tags: - translation - opus-mt-tc model-index: - name: opus-mt-tc-base-en-de results: - task: type: translation name: Translation eng-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-deu metrics: - type: bleu value: 43.7 name: BLEU --- # Opus Tatoeba English-German *This model was obtained by running the script [convert_marian_to_pytorch.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/marian/convert_marian_to_pytorch.py). The original models were trained by [Jörg Tiedemann](https://blogs.helsinki.fi/tiedeman/) using the [MarianNMT](https://marian-nmt.github.io/) library. See all available `MarianMTModel` models on the profile of the [Helsinki NLP](https://huggingface.co/Helsinki-NLP) group.* * dataset: opusTCv20210807+bt * model: transformer-big * source language(s): eng * target language(s): deu * raw source language(s): eng * raw target language(s): deu * model: transformer-big * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download: [opusTCv20210807+bt-2021-12-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.zip) * test set translations: [opusTCv20210807+bt-2021-12-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.test.txt) * test set scores: [opusTCv20210807+bt-2021-12-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | #sent | #words | BP | |---------|-------|-------|-------|--------|----| | newssyscomb2009.eng-deu | 24.3 | 0.5462 | 502 | 11271 | 0.993 | | news-test2008.eng-deu | 24.7 | 0.5412 | 2051 | 47427 | 1.000 | | newstest2009.eng-deu | 23.6 | 0.5385 | 2525 | 62816 | 0.999 | | newstest2010.eng-deu | 26.9 | 0.5589 | 2489 | 61511 | 0.966 | | newstest2011.eng-deu | 24.1 | 0.5364 | 3003 | 72981 | 0.990 | | newstest2012.eng-deu | 24.6 | 0.5375 | 3003 | 72886 | 0.972 | | newstest2013.eng-deu | 28.3 | 0.5636 | 3000 | 63737 | 0.988 | | newstest2014-deen.eng-deu | 30.9 | 0.6084 | 3003 | 62964 | 1.000 | | newstest2015-ende.eng-deu | 33.2 | 0.6106 | 2169 | 44260 | 1.000 | | newstest2016-ende.eng-deu | 39.8 | 0.6595 | 2999 | 62670 | 0.993 | | newstest2017-ende.eng-deu | 32.0 | 0.6047 | 3004 | 61291 | 1.000 | | newstest2018-ende.eng-deu | 48.8 | 0.7146 | 2998 | 64276 | 1.000 | | newstest2019-ende.eng-deu | 45.0 | 0.6821 | 1997 | 48969 | 0.995 | | Tatoeba-test-v2021-08-07.eng-deu | 43.7 | 0.6442 | 10000 | 85728 | 1.000 |
Czapla/Rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# roberta-psych --- language: en --- This is a [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) model pretrained on Alexander Street Database of Counselling and Psychotherapy Transcripts (see more about database and its content [here](https://alexanderstreet.com/products/counseling-and-psychotherapy-transcripts-series)). Further information about training, parameters and evaluation is available in our paper: Laricheva, M., Zhang, C., Liu, Y., Chen, G., Tracey, T., Young, R., & Carenini, G. (2022). [Automated Utterance Labeling of Conversations Using Natural Language Processing.](https://arxiv.org/abs/2208.06525) arXiv preprint arXiv:2208.06525 --- license: cc-by-nc-sa-2.0 ---
D3vil/DialoGPT-smaall-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Drive scorpion jacket on Stable Diffusion This is the `<drive-scorpion-jacket>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<drive-scorpion-jacket> 0](https://huggingface.co/sd-concepts-library/drive-scorpion-jacket/resolve/main/concept_images/5.jpeg) ![<drive-scorpion-jacket> 1](https://huggingface.co/sd-concepts-library/drive-scorpion-jacket/resolve/main/concept_images/3.jpeg) ![<drive-scorpion-jacket> 2](https://huggingface.co/sd-concepts-library/drive-scorpion-jacket/resolve/main/concept_images/0.jpeg) ![<drive-scorpion-jacket> 3](https://huggingface.co/sd-concepts-library/drive-scorpion-jacket/resolve/main/concept_images/2.jpeg) ![<drive-scorpion-jacket> 4](https://huggingface.co/sd-concepts-library/drive-scorpion-jacket/resolve/main/concept_images/1.jpeg) ![<drive-scorpion-jacket> 5](https://huggingface.co/sd-concepts-library/drive-scorpion-jacket/resolve/main/concept_images/4.jpeg)
D3vil/DialoGPT-smaall-harrypottery
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/frankdegods/1667103905913/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1579021841775009792/hUaDp-eu_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Frank</div> <div style="text-align: center; font-size: 14px;">@frankdegods</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Frank. | Data | Frank | | --- | --- | | Tweets downloaded | 3234 | | Retweets | 992 | | Short tweets | 539 | | Tweets kept | 1703 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nov3knn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @frankdegods's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ld1rac8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ld1rac8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/frankdegods') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
D4RL1NG/yes
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### dark penguin pinguinanimations on Stable Diffusion This is the `<darkpenguin-robot>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<darkpenguin-robot> 0](https://huggingface.co/sd-concepts-library/dark-penguin-pinguinanimations/resolve/main/concept_images/5.jpeg) ![<darkpenguin-robot> 1](https://huggingface.co/sd-concepts-library/dark-penguin-pinguinanimations/resolve/main/concept_images/6.jpeg) ![<darkpenguin-robot> 2](https://huggingface.co/sd-concepts-library/dark-penguin-pinguinanimations/resolve/main/concept_images/3.jpeg) ![<darkpenguin-robot> 3](https://huggingface.co/sd-concepts-library/dark-penguin-pinguinanimations/resolve/main/concept_images/0.jpeg) ![<darkpenguin-robot> 4](https://huggingface.co/sd-concepts-library/dark-penguin-pinguinanimations/resolve/main/concept_images/2.jpeg) ![<darkpenguin-robot> 5](https://huggingface.co/sd-concepts-library/dark-penguin-pinguinanimations/resolve/main/concept_images/7.jpeg) ![<darkpenguin-robot> 6](https://huggingface.co/sd-concepts-library/dark-penguin-pinguinanimations/resolve/main/concept_images/1.jpeg) ![<darkpenguin-robot> 7](https://huggingface.co/sd-concepts-library/dark-penguin-pinguinanimations/resolve/main/concept_images/4.jpeg) ![<darkpenguin-robot> 8](https://huggingface.co/sd-concepts-library/dark-penguin-pinguinanimations/resolve/main/concept_images/8.jpeg)