| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 4-81 |
| tags | list | |
| pipeline_tag | string (categorical) | 17 distinct values |
| config | dict | |
| downloads | int64 | 0-59.7M |
| first_commit | timestamp[ns, tz=UTC] | |
| card | string | length 51-438k |
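For orientation, a table with the columns above can be read back and filtered with ordinary dataframe tooling. A minimal sketch, assuming the dump has been exported to a Parquet file called `models_metadata.parquet` (the file name and export format are assumptions, not part of the dump):

```python
import pandas as pd

# File name and format are hypothetical; the schema above only defines the columns.
df = pd.read_parquet("models_metadata.parquet")

# Columns follow the schema: modelId, tags, pipeline_tag, config,
# downloads, first_commit, card.
text_generation = df[df["pipeline_tag"] == "text-generation"]

# Ten most-downloaded text-generation entries.
top = text_generation.sort_values("downloads", ascending=False).head(10)
print(top[["modelId", "downloads"]])
```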
AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 240 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 240, "warmup_steps": 24, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: openrail --- ![ccstest.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672765891559-63602a9f3605bd411c18b4e0.jpeg) Cardcaptor Sakura model trained on anime screenshots; all training images were 768 resolution, with a batch size of 16 and a learning rate of 1.6e-5. The number in each checkpoint name indicates the epoch. You can really only do Sakura and Tomoyo. Something like "kinomoto sakura, white beret, school bag, tomeda elementary school uniform, happy" should give OK results when added to a normal prompt.
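As a usage illustration (not part of the original card), a minimal text-to-image sketch with the `diffusers` library; the repo id `your-username/ccs-test` is a placeholder, and loading assumes the checkpoint is available in the standard diffusers layout:

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id is hypothetical; substitute the actual checkpoint location.
pipe = StableDiffusionPipeline.from_pretrained(
    "your-username/ccs-test", torch_dtype=torch.float16
).to("cuda")

# Prompt built from the tags suggested in the card above.
prompt = "kinomoto sakura, white beret, school bag, tomeda elementary school uniform, happy"
image = pipe(prompt, height=768, width=768).images[0]  # card says training used 768px images
image.save("sakura.png")
```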
AnonymousSub/unsup-consert-base
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - BreakoutNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: C51 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: BreakoutNoFrameskip-v4 type: BreakoutNoFrameskip-v4 metrics: - type: mean_reward value: 381.00 +/- 56.35 name: mean_reward verified: false --- # (CleanRL) **C51** Agent Playing **BreakoutNoFrameskip-v4** This is a trained model of a C51 agent playing BreakoutNoFrameskip-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/c51_atari.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[c51_atari]" python -m cleanrl_utils.enjoy --exp-name c51_atari --env-id BreakoutNoFrameskip-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/kinalmehta/BreakoutNoFrameskip-v4-c51_atari-seed1/raw/main/c51_atari.py curl -OL https://huggingface.co/kinalmehta/BreakoutNoFrameskip-v4-c51_atari-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/kinalmehta/BreakoutNoFrameskip-v4-c51_atari-seed1/raw/main/poetry.lock poetry install --all-extras python c51_atari.py --save-model --upload-model --hf-entity kinalmehta --env-id BreakoutNoFrameskip-v4 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'env_id': 'BreakoutNoFrameskip-v4', 'exp_name': 'c51_atari', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'kinalmehta', 'learning_rate': 0.00025, 'learning_starts': 80000, 'n_atoms': 51, 'save_model': True, 'seed': 1, 'start_e': 1, 'target_network_frequency': 10000, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': False, 'train_frequency': 4, 'upload_model': True, 'v_max': 10, 'v_min': -10, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
AnthonyNelson/DialoGPT-small-ricksanchez
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="BKluwe2209/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Anthos23/distilbert-base-uncased-finetuned-sst2
[ "tf", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_keras_callback", "license:apache-2.0" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-base-timit-finetune-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-finetune-3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4387 - Wer: 0.2750 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4692 | 6.94 | 500 | 0.9978 | 0.7091 | | 0.3946 | 13.88 | 1000 | 0.3674 | 0.3253 | | 0.1291 | 20.83 | 1500 | 0.3987 | 0.3042 | | 0.074 | 27.77 | 2000 | 0.4292 | 0.2916 | | 0.0487 | 34.72 | 2500 | 0.4302 | 0.2853 | | 0.0368 | 41.66 | 3000 | 0.4222 | 0.2789 | | 0.0281 | 48.61 | 3500 | 0.4481 | 0.2783 | | 0.0237 | 55.55 | 4000 | 0.4387 | 0.2750 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1 - Datasets 2.7.0 - Tokenizers 0.11.0
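The card above reports hyperparameters and WER but no inference code. A minimal sketch using the `transformers` ASR pipeline, with `your-username/wav2vec2-base-timit-finetune-3` standing in for the actual repo id (a placeholder, not confirmed by the card):

```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned checkpoint described above.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-base-timit-finetune-3",
)

# Expects 16 kHz mono audio, as used by wav2vec2-base; "sample.wav" is an example path.
result = asr("sample.wav")
print(result["text"])
```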
Anthos23/sentiment-roberta-large-english-finetuned-sentiment-analysis
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Krystal-Test Dreambooth model trained by Slashy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
AntonClaesson/movie-plot-generator
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: distilbart-podimo-data-eval-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbart-podimo-data-eval-3 This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3828 - Rouge1: 32.8203 - Rouge2: 7.8994 - Rougel: 18.9659 - Rougelsum: 29.4196 - Gen Len: 114.5264 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:| | 3.9049 | 1.0 | 132 | 3.5343 | 30.2542 | 6.031 | 17.269 | 26.9847 | 113.7689 | | 3.4248 | 2.0 | 264 | 3.4055 | 31.6518 | 7.2786 | 18.2641 | 28.4006 | 114.6547 | | 3.1594 | 3.0 | 396 | 3.3579 | 32.0442 | 7.3554 | 18.3492 | 28.7615 | 113.7443 | | 2.9645 | 4.0 | 528 | 3.3445 | 32.0945 | 7.637 | 18.6289 | 28.899 | 115.5321 | | 2.8073 | 5.0 | 660 | 3.3470 | 32.7852 | 7.9597 | 19.2358 | 29.5057 | 108.3519 | | 2.685 | 6.0 | 792 | 3.3532 | 32.3775 | 7.661 | 18.6719 | 28.9282 | 117.1104 | | 2.5941 | 7.0 | 924 | 3.3711 | 32.6976 | 7.8917 | 19.069 | 29.3785 | 113.1943 | | 2.5267 | 8.0 | 1056 | 3.3828 | 32.8203 | 7.8994 | 18.9659 | 29.4196 | 114.5264 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.11.0 - Datasets 2.2.1 - Tokenizers 0.12.1
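For completeness (not part of the original card), a summarization sketch with the `transformers` pipeline; the repo id is a placeholder for the distilbart checkpoint described above:

```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned distilbart checkpoint.
summarizer = pipeline("summarization", model="your-username/distilbart-podimo-data-eval-3")

text = "Long podcast episode description goes here ..."
summary = summarizer(text, max_length=128, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```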
Antony/mint_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-base-language-detection-finetuned-ner-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-language-detection-finetuned-ner-finetuned-ner This model is a fine-tuned version of [carexl8/xlm-roberta-base-language-detection-finetuned-ner](https://huggingface.co/carexl8/xlm-roberta-base-language-detection-finetuned-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 - Precision: 1.0000 - Recall: 1.0000 - F1: 1.0000 - Accuracy: 1.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0007 | 1.0 | 1543 | 0.0001 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | | 0.0003 | 2.0 | 3086 | 0.0001 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
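A quick inference sketch (not in the original card), assuming the fine-tuned checkpoint loads as a standard token-classification model; the repo id below is a placeholder:

```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned NER checkpoint described above.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-language-detection-finetuned-ner-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

for entity in ner("Angela Merkel visited Paris in 2019."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```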
Anubhav23/indianlegal
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-03T19:29:56Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 6.50 +/- 16.29 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tkurtulus -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tkurtulus -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tkurtulus ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
gaurishhs/API
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
NOT COMPATIBLE WITH V1-BASED MODELS. I did this because I was bored and I find writing ridiculous negative prompts funny, and nobody had done it yet with WD 1.4. To say I was shocked at the results is an understatement: it can turn extremely simple prompts like "1girl" into masterpieces without having to actually say "masterpiece" or anything other than "by wd14neg" in negatives. So you can all stop complaining that WD 1.4 isn't as good as Anything V3 now. I never really share my stuff because I hate attention, but I just had to this time. Trained on 500+ images generated with WD 1.4 anime from an extremely exaggerated negative prompt placed in positives (seriously, it was like 2000 tokens) and quality enhancers in negatives. wd14neg is the best and can stand on its own, but it may improve with wd14neg-2 and/or wd14neg-3. Combining all of them may result in deep-frying; to solve this, either lower your CFG or de-emphasize it. The embedding may struggle with eliminating extra limbs and fingers, since I mainly focused on blurriness, flat colors, and JPEG artifacts to fix embeddings of mine with crappy datasets I don't have the executive function to redo, but I plan to make another embedding focusing on those. ![with](https://files.catbox.moe/mn5hye.png "with") ![without](https://files.catbox.moe/cy2wtw.png "without")
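The card describes A1111-style usage (the phrase `by wd14neg` in the negative prompt). As a sketch only, the same kind of embedding could in principle be loaded in `diffusers`; the base-model path, embedding file name, and token below are all placeholders/assumptions, not taken from the card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint path is a placeholder; the card only says the embedding
# targets WD 1.4 and is NOT compatible with SD v1-based models.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/wd-1-4-anime-base", torch_dtype=torch.float16
).to("cuda")

# Load the A1111-style embedding file (file name assumed) under its trigger token.
pipe.load_textual_inversion("./wd14neg.pt", token="wd14neg")

# The embedding belongs in the *negative* prompt, as the card describes.
image = pipe("1girl", negative_prompt="by wd14neg").images[0]
image.save("1girl.png")
```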
Apisate/DialoGPT-small-jordan
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
Access to model Antale123/ConorBot is restricted and you are not in the authorized list. Visit https://huggingface.co/Antale123/ConorBot to ask for access.
Apisate/Discord-Ai-Bot
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: gpt2-ner-invoiceSenderRecipient_all_inv_03_01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-ner-invoiceSenderRecipient_all_inv_03_01 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0307 - Precision: 0.7932 - Recall: 0.8488 - F1: 0.8201 - Accuracy: 0.9895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0363 | 0.01 | 500 | 0.0338 | 0.7846 | 0.7969 | 0.7907 | 0.9884 | | 0.0392 | 0.02 | 1000 | 0.0346 | 0.7665 | 0.8211 | 0.7929 | 0.9881 | | 0.0363 | 0.04 | 1500 | 0.0347 | 0.7701 | 0.8075 | 0.7884 | 0.9880 | | 0.0396 | 0.05 | 2000 | 0.0347 | 0.7454 | 0.8375 | 0.7888 | 0.9879 | | 0.0366 | 0.06 | 2500 | 0.0350 | 0.7519 | 0.8345 | 0.7911 | 0.9879 | | 0.0382 | 0.07 | 3000 | 0.0356 | 0.7500 | 0.8434 | 0.7939 | 0.9877 | | 0.0424 | 0.09 | 3500 | 0.0358 | 0.7517 | 0.8287 | 0.7883 | 0.9877 | | 0.0385 | 0.1 | 4000 | 0.0352 | 0.7605 | 0.8225 | 0.7903 | 0.9880 | | 0.0382 | 0.11 | 4500 | 0.0361 | 0.7494 | 0.8159 | 0.7813 | 0.9874 | | 0.0372 | 0.12 | 5000 | 0.0345 | 0.7817 | 0.8044 | 0.7929 | 0.9885 | | 0.0377 | 0.14 | 5500 | 0.0346 | 0.7749 | 0.8238 | 0.7986 | 0.9884 | | 0.0383 | 0.15 | 6000 | 0.0359 | 0.7568 | 0.8341 | 0.7936 | 0.9879 | | 0.0372 | 0.16 | 6500 | 0.0356 | 0.7548 | 0.8356 | 0.7932 | 0.9879 | | 0.0371 | 0.17 | 7000 | 0.0352 | 0.7540 | 0.8477 | 0.7981 | 0.9880 | | 0.0368 | 0.19 | 7500 | 0.0349 | 0.7662 | 0.8310 | 0.7973 | 0.9881 | | 0.0388 | 0.2 | 8000 | 0.0339 | 0.7648 | 0.8336 | 0.7977 | 0.9883 | | 0.0368 | 0.21 | 8500 | 0.0336 | 0.7729 | 0.8305 | 0.8006 | 0.9886 | | 0.0389 | 0.22 | 9000 | 0.0340 | 0.7750 | 0.8208 | 0.7972 | 0.9884 | | 0.0384 | 0.24 | 9500 | 0.0349 | 0.7549 | 0.8499 | 0.7996 | 0.9880 | | 0.0376 | 0.25 | 10000 | 0.0358 | 0.7531 | 0.8390 | 0.7938 | 0.9875 | | 0.0354 | 0.26 | 10500 | 0.0346 | 0.7650 | 0.8318 | 0.7970 | 0.9882 | | 0.0358 | 0.27 | 11000 | 0.0338 | 0.7694 | 0.8397 | 0.8030 | 0.9886 | | 0.0389 | 0.28 | 11500 | 0.0341 | 0.7586 | 0.8502 | 0.8018 | 0.9882 | | 0.0383 | 0.3 | 12000 | 0.0342 | 0.7688 | 0.8275 | 0.7971 | 0.9881 | | 0.0355 | 0.31 | 12500 | 0.0337 | 0.7783 | 0.8281 | 0.8024 | 0.9885 | | 0.0372 | 0.32 | 13000 | 0.0338 | 0.7703 | 0.8399 | 0.8036 | 0.9884 | | 0.0369 | 0.33 | 13500 | 0.0331 | 0.7683 | 0.8427 | 0.8038 | 0.9886 | | 0.0361 | 0.35 | 14000 | 0.0336 | 0.7699 | 0.8322 | 0.7999 | 0.9885 | | 0.0361 | 0.36 | 14500 | 0.0336 | 0.7735 | 0.8390 | 0.8049 | 0.9885 | | 0.0372 | 0.37 | 15000 | 0.0333 | 0.7747 | 0.8343 | 0.8034 | 0.9887 | | 0.0366 | 0.38 | 15500 | 0.0343 | 0.7646 | 0.8468 | 0.8036 | 0.9883 | | 0.0345 | 0.4 | 16000 | 0.0333 | 0.7790 | 0.8334 | 0.8053 | 0.9887 | | 0.0363 | 0.41 | 
16500 | 0.0329 | 0.7783 | 0.8301 | 0.8034 | 0.9887 | | 0.0348 | 0.42 | 17000 | 0.0341 | 0.7626 | 0.8533 | 0.8054 | 0.9884 | | 0.0391 | 0.43 | 17500 | 0.0324 | 0.7873 | 0.8295 | 0.8079 | 0.9889 | | 0.0344 | 0.45 | 18000 | 0.0334 | 0.7769 | 0.8369 | 0.8058 | 0.9887 | | 0.0378 | 0.46 | 18500 | 0.0337 | 0.7741 | 0.8394 | 0.8054 | 0.9886 | | 0.035 | 0.47 | 19000 | 0.0328 | 0.7827 | 0.8323 | 0.8067 | 0.9888 | | 0.0351 | 0.48 | 19500 | 0.0327 | 0.7815 | 0.8371 | 0.8083 | 0.9889 | | 0.037 | 0.5 | 20000 | 0.0328 | 0.7793 | 0.8388 | 0.8079 | 0.9888 | | 0.0346 | 0.51 | 20500 | 0.0325 | 0.7804 | 0.8416 | 0.8099 | 0.9890 | | 0.0364 | 0.52 | 21000 | 0.0323 | 0.7861 | 0.8339 | 0.8093 | 0.9889 | | 0.0356 | 0.53 | 21500 | 0.0327 | 0.7729 | 0.8510 | 0.8101 | 0.9889 | | 0.0346 | 0.54 | 22000 | 0.0325 | 0.7791 | 0.8407 | 0.8087 | 0.9889 | | 0.0342 | 0.56 | 22500 | 0.0334 | 0.7790 | 0.8443 | 0.8104 | 0.9889 | | 0.0368 | 0.57 | 23000 | 0.0322 | 0.7869 | 0.8323 | 0.8089 | 0.9890 | | 0.0371 | 0.58 | 23500 | 0.0320 | 0.7890 | 0.8356 | 0.8116 | 0.9891 | | 0.0344 | 0.59 | 24000 | 0.0321 | 0.7910 | 0.8321 | 0.8110 | 0.9892 | | 0.0342 | 0.61 | 24500 | 0.0319 | 0.7881 | 0.8356 | 0.8111 | 0.9892 | | 0.0339 | 0.62 | 25000 | 0.0320 | 0.7889 | 0.8317 | 0.8097 | 0.9892 | | 0.0347 | 0.63 | 25500 | 0.0316 | 0.7909 | 0.8347 | 0.8122 | 0.9892 | | 0.034 | 0.64 | 26000 | 0.0318 | 0.7887 | 0.8324 | 0.8100 | 0.9891 | | 0.0347 | 0.66 | 26500 | 0.0317 | 0.7791 | 0.8525 | 0.8141 | 0.9891 | | 0.0345 | 0.67 | 27000 | 0.0318 | 0.7870 | 0.8384 | 0.8119 | 0.9892 | | 0.0347 | 0.68 | 27500 | 0.0317 | 0.7903 | 0.8426 | 0.8157 | 0.9893 | | 0.0371 | 0.69 | 28000 | 0.0311 | 0.7965 | 0.8332 | 0.8144 | 0.9894 | | 0.0338 | 0.71 | 28500 | 0.0316 | 0.7863 | 0.8442 | 0.8142 | 0.9892 | | 0.0352 | 0.72 | 29000 | 0.0315 | 0.7810 | 0.8537 | 0.8157 | 0.9892 | | 0.0344 | 0.73 | 29500 | 0.0314 | 0.7953 | 0.8353 | 0.8148 | 0.9894 | | 0.0322 | 0.74 | 30000 | 0.0320 | 0.7836 | 0.8449 | 0.8131 | 0.9891 | | 0.0355 | 0.76 | 30500 | 0.0312 | 0.7877 | 0.8480 | 0.8167 | 0.9894 | | 0.035 | 0.77 | 31000 | 0.0313 | 0.7864 | 0.8504 | 0.8171 | 0.9893 | | 0.0346 | 0.78 | 31500 | 0.0310 | 0.7931 | 0.8424 | 0.8170 | 0.9895 | | 0.0339 | 0.79 | 32000 | 0.0316 | 0.7857 | 0.8501 | 0.8166 | 0.9893 | | 0.033 | 0.8 | 32500 | 0.0311 | 0.7975 | 0.8406 | 0.8185 | 0.9895 | | 0.0337 | 0.82 | 33000 | 0.0314 | 0.7886 | 0.8457 | 0.8162 | 0.9894 | | 0.0357 | 0.83 | 33500 | 0.0311 | 0.7923 | 0.8437 | 0.8172 | 0.9894 | | 0.0348 | 0.84 | 34000 | 0.0312 | 0.7909 | 0.8490 | 0.8189 | 0.9894 | | 0.0343 | 0.85 | 34500 | 0.0311 | 0.7856 | 0.8528 | 0.8179 | 0.9893 | | 0.0323 | 0.87 | 35000 | 0.0311 | 0.7884 | 0.8505 | 0.8183 | 0.9894 | | 0.0329 | 0.88 | 35500 | 0.0307 | 0.7981 | 0.8399 | 0.8185 | 0.9896 | | 0.0324 | 0.89 | 36000 | 0.0313 | 0.7830 | 0.8576 | 0.8186 | 0.9893 | | 0.0336 | 0.9 | 36500 | 0.0312 | 0.7836 | 0.8566 | 0.8185 | 0.9893 | | 0.0327 | 0.92 | 37000 | 0.0309 | 0.7887 | 0.8501 | 0.8182 | 0.9895 | | 0.0338 | 0.93 | 37500 | 0.0312 | 0.7887 | 0.8514 | 0.8188 | 0.9894 | | 0.0327 | 0.94 | 38000 | 0.0311 | 0.7873 | 0.8534 | 0.8190 | 0.9894 | | 0.0326 | 0.95 | 38500 | 0.0308 | 0.7953 | 0.8459 | 0.8198 | 0.9895 | | 0.0338 | 0.97 | 39000 | 0.0307 | 0.7932 | 0.8488 | 0.8201 | 0.9895 | | 0.0354 | 0.98 | 39500 | 0.0308 | 0.7916 | 0.8502 | 0.8198 | 0.9895 | | 0.0313 | 0.99 | 40000 | 0.0309 | 0.7897 | 0.8523 | 0.8198 | 0.9895 | ### Framework versions - Transformers 4.22.0 - Pytorch 1.10.0 - Tokenizers 0.12.1
ArBert/albert-base-v2-finetuned-ner-agglo-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 303 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 303, "warmup_steps": 31, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (3): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
ArBert/albert-base-v2-finetuned-ner-agglo
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="amal94/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ArBert/albert-base-v2-finetuned-ner-gmm
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="amal94/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ArBert/bert-base-uncased-finetuned-ner-agglo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-default results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.79 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="gmojko/Taxi-v3-default", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ArBert/bert-base-uncased-finetuned-ner-kmeans-twitter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - summarize_from_feedback metrics: - rouge model-index: - name: flan-t5-base-finetuned-openai-summarize_from_feedback results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: summarize_from_feedback type: summarize_from_feedback config: comparisons split: train args: comparisons metrics: - name: Rouge1 type: rouge value: 29.3494 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-finetuned-openai-summarize_from_feedback This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the summarize_from_feedback dataset. It achieves the following results on the evaluation set: - Loss: 1.8833 - Rouge1: 29.3494 - Rouge2: 10.9406 - Rougel: 23.9907 - Rougelsum: 25.461 - Gen Len: 18.9265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.7678 | 1.0 | 5804 | 1.8833 | 29.3494 | 10.9406 | 23.9907 | 25.461 | 18.9265 | | 1.5839 | 2.0 | 11608 | 1.8992 | 29.6239 | 11.1795 | 24.2927 | 25.7183 | 18.9358 | | 1.4812 | 3.0 | 17412 | 1.8929 | 29.8899 | 11.2855 | 24.4193 | 25.9219 | 18.9189 | | 1.4198 | 4.0 | 23216 | 1.8939 | 29.8897 | 11.2606 | 24.3262 | 25.8642 | 18.9309 | | 1.3612 | 5.0 | 29020 | 1.9105 | 29.8469 | 11.2112 | 24.2483 | 25.7884 | 18.9396 | | 1.3279 | 6.0 | 34824 | 1.9170 | 30.038 | 11.3426 | 24.4385 | 25.9675 | 18.9328 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
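To illustrate inference (not part of the original card), a sketch with `AutoModelForSeq2SeqLM`; the repo id and the `summarize:` instruction prefix are assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id for the fine-tuned Flan-T5 checkpoint described above.
model_id = "your-username/flan-t5-base-finetuned-openai-summarize_from_feedback"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "summarize:" prefix is an assumption; the card does not state the prompt format.
post = "summarize: " + "Long Reddit post to summarize goes here ..."
inputs = tokenizer(post, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```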
ArBert/bert-base-uncased-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - summarize_from_feedback metrics: - rouge model-index: - name: flan-t5-small-finetuned-openai-summarize_from_feedback results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: summarize_from_feedback type: summarize_from_feedback config: comparisons split: train args: comparisons metrics: - name: Rouge1 type: rouge value: 27.2966 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Flan-T5 (small) fine-tuned on OpenAI summarize_from_feedback for summarizing This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the summarize_from_feedback dataset. It achieves the following results on the evaluation set: - Loss: 2.1488 - Rouge1: 27.2966 - Rouge2: 9.5886 - Rougel: 22.1999 - Rougelsum: 23.6317 - Gen Len: 18.9310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.2472 | 1.0 | 2902 | 2.1882 | 26.2033 | 8.83 | 21.3673 | 22.7758 | 18.9234 | | 2.1142 | 2.0 | 5804 | 2.1608 | 27.1972 | 9.4269 | 22.1761 | 23.6252 | 18.8796 | | 2.0484 | 3.0 | 8706 | 2.1524 | 27.0963 | 9.4578 | 21.9866 | 23.5124 | 18.9033 | | 2.0055 | 4.0 | 11608 | 2.1519 | 27.2428 | 9.5514 | 22.1542 | 23.6036 | 18.9347 | | 1.9647 | 5.0 | 14510 | 2.1488 | 27.2966 | 9.5886 | 22.1999 | 23.6317 | 18.9310 | | 1.9547 | 6.0 | 17412 | 2.1488 | 27.5602 | 9.673 | 22.3768 | 23.8399 | 18.9236 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
ArBert/bert-base-uncased-finetuned-ner
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 375.00 +/- 126.57 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mahmoud-mohey -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mahmoud-mohey -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mahmoud-mohey ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.15), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.001), ('learning_starts', 100000), ('n_timesteps', 1500000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1200), ('train_freq', 4), ('normalize', False)]) ```
ArBert/roberta-base-finetuned-ner-gmm-twitter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: testnewreinforcecartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 392.60 +/- 31.61 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Class: https://huggingface.co/deep-rl-course/unit4/introduction
ArBert/roberta-base-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="gmojko/Taxi-v3-v2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Aran/DialoGPT-medium-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: legal_text_classifier_10_class results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # legal_text_classifier_10_class This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2908 - F1: 0.9329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 84 | 0.7287 | 0.8255 | | No log | 2.0 | 168 | 0.3221 | 0.9262 | | No log | 3.0 | 252 | 0.3014 | 0.9060 | | No log | 4.0 | 336 | 0.3104 | 0.9128 | | No log | 5.0 | 420 | 0.2636 | 0.9262 | | 0.4014 | 6.0 | 504 | 0.2793 | 0.9262 | | 0.4014 | 7.0 | 588 | 0.2509 | 0.9262 | | 0.4014 | 8.0 | 672 | 0.2715 | 0.9329 | | 0.4014 | 9.0 | 756 | 0.2688 | 0.9329 | | 0.4014 | 10.0 | 840 | 0.2850 | 0.9329 | | 0.4014 | 11.0 | 924 | 0.2972 | 0.9329 | | 0.069 | 12.0 | 1008 | 0.2783 | 0.9329 | | 0.069 | 13.0 | 1092 | 0.2942 | 0.9329 | | 0.069 | 14.0 | 1176 | 0.2725 | 0.9329 | | 0.069 | 15.0 | 1260 | 0.2594 | 0.9329 | | 0.069 | 16.0 | 1344 | 0.2768 | 0.9329 | | 0.069 | 17.0 | 1428 | 0.2889 | 0.9329 | | 0.0447 | 18.0 | 1512 | 0.2908 | 0.9329 | | 0.0447 | 19.0 | 1596 | 0.2880 | 0.9329 | | 0.0447 | 20.0 | 1680 | 0.2908 | 0.9329 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
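A brief inference sketch (not in the original card); the repo id is a placeholder for the AraBERT-based classifier described above:

```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned 10-class legal text classifier.
classifier = pipeline(
    "text-classification",
    model="your-username/legal_text_classifier_10_class",
)

# Input would be an Arabic legal clause; a placeholder string is used here.
print(classifier("نص المادة القانونية هنا ..."))
```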
Aran/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1268086791443230737/BRGz4AiW_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1394266006395228162/qIjjvzl7_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Pop Base & Pop Crave</div> <div style="text-align: center; font-size: 14px;">@popbase-popcrave</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Pop Base & Pop Crave. | Data | Pop Base | Pop Crave | | --- | --- | --- | | Tweets downloaded | 3240 | 3212 | | Retweets | 343 | 244 | | Short tweets | 306 | 89 | | Tweets kept | 2591 | 2879 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/231p93io/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @popbase-popcrave's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9st2g69y) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9st2g69y/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/popbase-popcrave') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ArashEsk95/bert-base-uncased-finetuned-stsb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-03T21:06:56Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 605 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 605, "warmup_steps": 61, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Aravinth/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- inference: false tags: - onnx - bert - adapterhub:comsense/cosmosqa - adapter-transformers datasets: - cosmos_qa language: - en --- # ONNX export of Adapter `AdapterHub/bert-base-uncased-pf-cosmos_qa` for bert-base-uncased ## Conversion of [AdapterHub/bert-base-uncased-pf-cosmos_qa](https://huggingface.co/AdapterHub/bert-base-uncased-pf-cosmos_qa) for UKP SQuARE ## Usage ```python import numpy as np from huggingface_hub import hf_hub_download from onnxruntime import InferenceSession from transformers import AutoTokenizer onnx_path = hf_hub_download(repo_id='UKP-SQuARE/bert-base-uncased-pf-cosmos_qa-onnx', filename='model.onnx') # or model_quant.onnx for quantization onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider']) context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.' question = 'What are advantages of ONNX?' choices = ["Cat", "Horse", "Tiger", "Fish"] tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/bert-base-uncased-pf-cosmos_qa-onnx') raw_input = [[context, question + " " + choice] for choice in choices] inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np") inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0) inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0) inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0) outputs = onnx_model.run(input_feed=dict(inputs), output_names=None) ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
ArcQ/gpt-experiments
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.56 +/- 19.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Archie/myProject
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-03T21:12:42Z
--- inference: false tags: - onnx - roberta - adapterhub:comsense/cosmosqa - adapter-transformers datasets: - cosmos_qa language: - en --- # ONNX export of Adapter `AdapterHub/roberta-base-pf-cosmos_qa` for roberta-base ## Conversion of [AdapterHub/roberta-base-pf-cosmos_qa](https://huggingface.co/AdapterHub/roberta-base-pf-cosmos_qa) for UKP SQuARE ## Usage ```python import numpy as np from huggingface_hub import hf_hub_download from onnxruntime import InferenceSession from transformers import AutoTokenizer onnx_path = hf_hub_download(repo_id='UKP-SQuARE/roberta-base-pf-cosmos_qa-onnx', filename='model.onnx') # or model_quant.onnx for quantization onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider']) context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.' question = 'What are advantages of ONNX?' choices = ["Cat", "Horse", "Tiger", "Fish"] tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/roberta-base-pf-cosmos_qa-onnx') raw_input = [[context, question + " " + choice] for choice in choices] inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np") inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0) inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0) inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0) outputs = onnx_model.run(input_feed=dict(inputs), output_names=None) ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
ArenaGrenade/char-cnn
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: jnacey2/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AriakimTaiyo/DialoGPT-cultured-Kumiko
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- inference: false tags: - onnx - text-classification - adapterhub:rc/multirc - bert - adapter-transformers language: - en --- # ONNX export of Adapter `AdapterHub/bert-base-uncased-pf-multirc` for bert-base-uncased ## Conversion of [AdapterHub/bert-base-uncased-pf-multirc](https://huggingface.co/AdapterHub/bert-base-uncased-pf-multirc) for UKP SQuARE ## Usage ```python import numpy as np from huggingface_hub import hf_hub_download from onnxruntime import InferenceSession from transformers import AutoTokenizer onnx_path = hf_hub_download(repo_id='UKP-SQuARE/bert-base-uncased-pf-multirc-onnx', filename='model.onnx') # or model_quant.onnx for quantization onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider']) context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.' question = 'What are advantages of ONNX?' choices = ["Cat", "Horse", "Tiger", "Fish"] tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/bert-base-uncased-pf-multirc-onnx') raw_input = [[context, question + " " + choice] for choice in choices] inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np") inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0) inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0) inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0) outputs = onnx_model.run(input_feed=dict(inputs), output_names=None) ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AriakimTaiyo/DialoGPT-medium-Kumiko
[ "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: tiny-mlm-snli-plain_text results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-snli-plain_text This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.665 | 0.4 | 500 | 3.2495 | | 3.4103 | 0.8 | 1000 | nan | | 3.2635 | 1.2 | 1500 | 3.1518 | | 3.1738 | 1.6 | 2000 | 3.1555 | | 3.0556 | 2.0 | 2500 | 3.0593 | | 2.9933 | 2.4 | 3000 | 3.0970 | | 2.9019 | 2.8 | 3500 | 3.0773 | | 2.876 | 3.2 | 4000 | 3.1233 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.13.2
AriakimTaiyo/DialoGPT-revised-Kumiko
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- inference: false tags: - onnx - text-classification - adapterhub:rc/multirc - roberta - adapter-transformers language: - en --- # ONNX export of Adapter `AdapterHub/roberta-base-pf-multirc` for roberta-base ## Conversion of [AdapterHub/roberta-base-pf-multirc](https://huggingface.co/AdapterHub/roberta-base-pf-multirc) for UKP SQuARE ## Usage ```python import numpy as np from huggingface_hub import hf_hub_download from onnxruntime import InferenceSession from transformers import AutoTokenizer onnx_path = hf_hub_download(repo_id='UKP-SQuARE/roberta-base-pf-multirc-onnx', filename='model.onnx') # or model_quant.onnx for quantization onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider']) context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.' question = 'What are advantages of ONNX?' choices = ["Cat", "Horse", "Tiger", "Fish"] tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/roberta-base-pf-multirc-onnx') raw_input = [[context, question + " " + choice] for choice in choices] inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np") inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0) inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0) inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0) outputs = onnx_model.run(input_feed=dict(inputs), output_names=None) ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AriakimTaiyo/DialoGPT-small-Rikka
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- inference: false tags: - onnx - bert - adapter-transformers datasets: - quail language: - en --- # ONNX export of Adapter `AdapterHub/bert-base-uncased-pf-quail` for bert-base-uncased ## Conversion of [AdapterHub/bert-base-uncased-pf-quail](https://huggingface.co/AdapterHub/bert-base-uncased-pf-quail) for UKP SQuARE ## Usage ```python import numpy as np from huggingface_hub import hf_hub_download from onnxruntime import InferenceSession from transformers import AutoTokenizer onnx_path = hf_hub_download(repo_id='UKP-SQuARE/bert-base-uncased-pf-quail-onnx', filename='model.onnx') # or model_quant.onnx for quantization onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider']) context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.' question = 'What are advantages of ONNX?' choices = ["Cat", "Horse", "Tiger", "Fish"] tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/bert-base-uncased-pf-quail-onnx') raw_input = [[context, question + " " + choice] for choice in choices] inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np") inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0) inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0) inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0) outputs = onnx_model.run(input_feed=dict(inputs), output_names=None) ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
AriakimTaiyo/kumiko
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- inference: false tags: - onnx - roberta - adapter-transformers datasets: - quail language: - en --- # ONNX export of Adapter `AdapterHub/roberta-base-pf-quail` for roberta-base ## Conversion of [AdapterHub/roberta-base-pf-quail](https://huggingface.co/AdapterHub/roberta-base-pf-quail) for UKP SQuARE ## Usage ```python import numpy as np from huggingface_hub import hf_hub_download from onnxruntime import InferenceSession from transformers import AutoTokenizer onnx_path = hf_hub_download(repo_id='UKP-SQuARE/roberta-base-pf-quail-onnx', filename='model.onnx') # or model_quant.onnx for quantization onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider']) context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.' question = 'What are advantages of ONNX?' choices = ["Cat", "Horse", "Tiger", "Fish"] tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/roberta-base-pf-quail-onnx') raw_input = [[context, question + " " + choice] for choice in choices] inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np") inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0) inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0) inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0) outputs = onnx_model.run(input_feed=dict(inputs), output_names=None) ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
Aries/T5_question_answering
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
5
null
--- inference: false tags: - onnx - roberta - adapter-transformers datasets: - quartz language: - en --- # ONNX export of Adapter `AdapterHub/roberta-base-pf-quartz` for roberta-base ## Conversion of [AdapterHub/roberta-base-pf-quartz](https://huggingface.co/AdapterHub/roberta-base-pf-quartz) for UKP SQuARE ## Usage ```python import numpy as np from huggingface_hub import hf_hub_download from onnxruntime import InferenceSession from transformers import AutoTokenizer onnx_path = hf_hub_download(repo_id='UKP-SQuARE/roberta-base-pf-quartz-onnx', filename='model.onnx') # or model_quant.onnx for quantization onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider']) context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.' question = 'What are advantages of ONNX?' choices = ["Cat", "Horse", "Tiger", "Fish"] tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/roberta-base-pf-quartz-onnx') raw_input = [[context, question + " " + choice] for choice in choices] inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np") inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0) inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0) inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0) outputs = onnx_model.run(input_feed=dict(inputs), output_names=None) ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
Arina/Erine
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - vision - image-classification datasets: - ongp/autotrain-data-test1 widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 4.8390309824523134 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 2718280758 - CO2 Emissions (in grams): 4.8390 ## Validation Metrics - Loss: 0.663 - Accuracy: 0.708 - Macro F1: 0.698 - Micro F1: 0.708 - Weighted F1: 0.712 - Macro Precision: 0.703 - Micro Precision: 0.708 - Weighted Precision: 0.717 - Macro Recall: 0.695 - Micro Recall: 0.708 - Weighted Recall: 0.708
ArjunKadya/HuggingFace
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- inference: false tags: - onnx - adapterhub:rc/race - bert - adapter-transformers datasets: - race language: - en --- # ONNX export of Adapter `AdapterHub/bert-base-uncased-pf-race` for bert-base-uncased ## Conversion of [AdapterHub/bert-base-uncased-pf-race](https://huggingface.co/AdapterHub/bert-base-uncased-pf-race) for UKP SQuARE ## Usage ```python import numpy as np from huggingface_hub import hf_hub_download from onnxruntime import InferenceSession from transformers import AutoTokenizer onnx_path = hf_hub_download(repo_id='UKP-SQuARE/bert-base-uncased-pf-race-onnx', filename='model.onnx') # or model_quant.onnx for quantization onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider']) context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.' question = 'What are advantages of ONNX?' choices = ["Cat", "Horse", "Tiger", "Fish"] tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/bert-base-uncased-pf-race-onnx') raw_input = [[context, question + " " + choice] for choice in choices] inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np") inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0) inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0) inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0) outputs = onnx_model.run(input_feed=dict(inputs), output_names=None) ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
Arkadiusz/Test-model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-03T21:36:38Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 684.50 +/- 150.66 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ScrappyCoco666 -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ScrappyCoco666 -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ScrappyCoco666 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Arnold/common_voiceha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: my_awesome_billsum_model results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum config: default split: ca_test args: default metrics: - name: Rouge1 type: rouge value: 0.1362 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5474 - Rouge1: 0.1362 - Rouge2: 0.0419 - Rougel: 0.1111 - Rougelsum: 0.1112 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8519 | 0.1206 | 0.0274 | 0.0991 | 0.0992 | 19.0 | | No log | 2.0 | 124 | 2.6323 | 0.1315 | 0.0377 | 0.1066 | 0.1067 | 19.0 | | No log | 3.0 | 186 | 2.5643 | 0.1371 | 0.043 | 0.1117 | 0.1118 | 19.0 | | No log | 4.0 | 248 | 2.5474 | 0.1362 | 0.0419 | 0.1111 | 0.1112 | 19.0 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Arnold/wav2vec2-hausa-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: twitter-xlm-roberta-base-sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-xlm-roberta-base-sentiment This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6256 - Accuracy: 0.7297 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 318.00 +/- 163.17 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RedPandaAINLP -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RedPandaAINLP -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RedPandaAINLP ``` ## Hyperparameters ```python OrderedDict([('batch_size', 512), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.05), ('learning_starts', 100000), ('n_timesteps', 300000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Aron/distilbert-base-uncased-finetuned-emotion
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- license: mit --- ### Dog Chip on Stable Diffusion This is the `<dog-chip>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cat-toy> 0](https://huggingface.co/sd-concepts-library/dog-chip/resolve/main/concept_images/2.jpeg) ![<cat-toy> 1](https://huggingface.co/sd-concepts-library/dog-chip/resolve/main/concept_images/1.jpeg) ![<cat-toy> 2](https://huggingface.co/sd-concepts-library/dog-chip/resolve/main/concept_images/3.jpeg) ![<cat-toy> 3](https://huggingface.co/sd-concepts-library/dog-chip/resolve/main/concept_images/0.jpeg)
ArpanZS/debug_squad
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 311.50 +/- 149.35 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mahmoud-mohey -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mahmoud-mohey -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mahmoud-mohey ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.08), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.001), ('learning_starts', 100000), ('n_timesteps', 1500000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
ArpanZS/search_model
[ "joblib" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- # RandomPrompt-v1 A fine-tuned GPT-Neo 125M. The purpose of this model is to autocomplete or generate danbooru-like prompts for generating images in Stable Diffusion derivatives that use danbooru tags for text conditioning. ## Usage THE HOSTED INTERFACE DOES NOT WORK, USE THE HUGGINGFACE SPACE ### Autocompletion Type in a few tags, and it will generate a completion of the prompt. ### Generation Type in nothing, and it will generate a prompt. ## Training Trained on 400k tags from danbooru posts for 600k steps, or around 0.25 epochs: https://wandb.ai/saltacc/RandomPrompt/runs/2v2arf0u?workspace=user-saltacc I plan on doing further runs on better hardware to try to get more accurate prompt completion.
ArtemisZealot/DialoGTP-small-Qkarin
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: QRDQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 1528.00 +/- 875.81 name: mean_reward verified: false --- # **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga 0xid -f logs/ python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga 0xid -f logs/ rl_zoo3 enjoy --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga 0xid ``` ## Hyperparameters ```python OrderedDict([('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_fraction', 0.025), ('frame_stack', 4), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('normalize', False)]) ```
AryanLala/autonlp-Scientific_Title_Generator-34558227
[ "pytorch", "pegasus", "text2text-generation", "en", "dataset:AryanLala/autonlp-data-Scientific_Title_Generator", "transformers", "autonlp", "co2_eq_emissions", "autotrain_compatible", "has_space" ]
text2text-generation
{ "architectures": [ "PegasusForConditionalGeneration" ], "model_type": "pegasus", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
103
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('ihanif/sd-class-butterflies-32') image = pipeline().images[0] image ```
Ashim/dga-transformer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
This model is a fine-tuned version of XLM-RoBERTa Base on the Amazon Reviews Multi dataset. The model is trained on the English data and reviews about groceries. It achieves the following results: Loss: 1.13 MAE: 0.56
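Since the card gives no usage snippet, here is a minimal sketch of how a fine-tuned XLM-RoBERTa review classifier like this one could be queried; the repo id and label format are assumptions, not taken from the card.

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual fine-tuned checkpoint described above.
classifier = pipeline("text-classification",
                      model="your-username/xlm-roberta-base-finetuned-amazon-reviews-en")

# The model is assumed to predict a star rating for an English grocery review.
print(classifier("The coffee beans arrived fresh and taste great."))
```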
Ashl3y/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - dutch_social model-index: - name: xlm-roberta-base-finetuned-marc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the dutch_social dataset. It achieves the following results on the evaluation set: - Loss: 0.1992 - Mae: 0.0532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.2824 | 1.0 | 10176 | 0.2370 | 0.0748 | | 0.1809 | 2.0 | 20352 | 0.1992 | 0.0532 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Ashok/my-new-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="thiagoms7/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Augustvember/WOKKAWOKKA
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1538 - eval_accuracy: 0.934 - eval_f1: 0.9344 - eval_runtime: 2.0513 - eval_samples_per_second: 974.99 - eval_steps_per_second: 15.6 - epoch: 2.0 - step: 500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0 - Datasets 2.8.0 - Tokenizers 0.13.2
Augustvember/WokkaBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-04T01:18:20Z
--- language: - vi license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: HuyenNguyen results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HuyenNguyen This model is a fine-tuned version of [Huyen2310/FPT-S15000](https://huggingface.co/Huyen2310/FPT-S15000) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 450 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Awsaf/DialoGPT-medium-eren
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: ES_roberta_30_all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ES_roberta_30_all This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Exact Match: 93.3333 - F1: 95.1806 - Loss: 0.0749 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Exact Match | F1 | Validation Loss | |:-------------:|:-----:|:----:|:-----------:|:-------:|:---------------:| | No log | 1.0 | 339 | 75.4167 | 83.9869 | 0.3639 | | 0.8028 | 2.0 | 678 | 90.0 | 93.9167 | 0.1313 | | 0.1661 | 3.0 | 1017 | 93.3333 | 95.1806 | 0.0749 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
Awsaf/large-eren
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-01-04T01:35:58Z
--- tags: - generated_from_trainer datasets: - indonlu metrics: - accuracy model-index: - name: indonesia-emotion-roberta results: - task: name: Text Classification type: text-classification dataset: name: indonlu type: indonlu config: emot split: train args: emot metrics: - name: Accuracy type: accuracy value: 0.2 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # indonesia-emotion-roberta This model was trained from scratch on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 7.2207 - Accuracy: 0.2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Ayham/albert_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Onwards! This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Antiraedus/sd-class-butterflies-32') image = pipeline().images[0] image ```
Ayham/distilbert_distilgpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 274.50 +/- 31.50 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga markafitzgerald1 -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga markafitzgerald1 -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga markafitzgerald1 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Ayham/distilbert_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: creativeml-openrail-m --- # About this bad ass beast of a checkpoint: I merged a few checkpoints and got something buttery and amazing. It does great with things other than people, too. It can do anything really. It doesn't need crazy prompts either. Keep it simple. No need for all the artist names and trending on whatever. # PRUNED AND SMALLER ckpt FILES! AS WELL AS DIFFUSERS! [Link here for diffusers and pruned](https://huggingface.co/johnslegers/hasdx) The download links are below as well. ![grid-0010.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672801449030-6344cba8762379fc63032a74.jpeg) Example Prompts: * female, pale purple hair, frills, detailed skin, perfect face, fashion photography, photo realistic, 20 megapixel, canon eos r3, detailed skin, detailed, detailed face, (full body intricate, vibrant, photo realistic, realistic, dramatic, sharp focus, 8k) * (extremely detailed photo 8k), full body shot photo of the most beautiful artwork, beautiful woman soldier, green hair, cleavage, wearing intricate advanced futuristic blue power armor, propped up on one elbow, cinematic lighting, very detailed face and eyes, park in background, high quality photo Negative prompt examples: * Asian, cartoon, 3d, (disfigured), (bad art), (deformed), (poorly drawn), (extra limbs), (close up), strange colors, blurry, boring, sketch, lackluster, big breast, large breast, huge breasts, face portrait, self-portrait, signature, letters, watermark * Asian, large boobs, muscular, out of frame , worst quality , text , blurred , monstrous , hideous , ugly , duplicate , cropped , mutilated , horrifying ### CKPT files here (and diffusers) [Download ckptSXDHAS.ckpt (7.7GB)](https://huggingface.co/BestJammer/HASDX/resolve/main/ckptSXDHAS.ckpt) [Download hasdx_emaonly.ckpt (4.27GB)](https://huggingface.co/johnslegers/hasdx/resolve/main/hasdx_emaonly.ckpt) [Download hasdx.ckpt (2.13GB)](https://huggingface.co/johnslegers/hasdx/resolve/main/hasdx.ckpt) # What I merged: https://civitai.com/models/1349/sxd-berrymix-merge https://civitai.com/models/2504/handas-3dkx10b https://civitai.com/models/3762/general-purpose-model The third one is a mystery: I cannot remember where I got it. It was called model.ckpt, and I uploaded it myself because the original source is lost. I did some merges at 0.4 weight, but most were at 0.5. I have tried so many merges and this one just clicked great. I merged SXD with that model to get modelsdx, then merged that with Handas. Enjoy! ### Not necessary at all but if you're feeling generous and want to help support my unhealthy amount of AI generating and future art endeavors: https://www.buymeacoffee.com/OnlyJams https://www.Only-Jams.redbubble.com
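A minimal sketch of loading the diffusers-format upload linked above (johnslegers/hasdx) with the standard StableDiffusionPipeline; the fp16/CUDA settings are assumptions, drop them to run on CPU.

```python
import torch
from diffusers import StableDiffusionPipeline

# The card links johnslegers/hasdx as the diffusers-format version of this merge.
pipe = StableDiffusionPipeline.from_pretrained("johnslegers/hasdx",
                                               torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

prompt = ("female, pale purple hair, frills, detailed skin, perfect face, "
          "fashion photography, photo realistic, detailed face")
negative = "cartoon, 3d, (disfigured), (bad art), (deformed), blurry, watermark"

image = pipe(prompt, negative_prompt=negative, num_inference_steps=30).images[0]
image.save("hasdx_sample.png")
```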
Ayham/ernie_gpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
HOW TO GET FREE SPINS ON COIN MASTER WITHOUT DOWNLOADING APPSHOW TO GET FREE SPINS ON COIN MASTER WITHOUT FACEBOOKCOIN MASTER FREE SPIN GENERATOR WITHOUT HUMAN VERIFICATION <a style="font-size:300%;color:red" href="https://mycheats.store/i/coinmaster">COIN MASTER SPINS GENERATOR 2023</a> <a style="font-size:300%;color:red" href="https://mycheats.store/i/coinmaster">COIN MASTER SPINS GENERATOR 2023</a> HOW CAN I GET COIN MASTER SPINSHOW DO YOU GET UNLIMITED SPINS ON COIN MASTER FOR FREEWHERE CAN YOU GET FREE SPINS FOR COIN MASTERHOW TO UNLIMITED SPINS IN COIN MASTERHOW TO GET FREE SPINS IN COIN MASTER GAME
Ayran/DialoGPT-medium-harry-1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Dense passage retriever (DPR) is a dense retrieval method described in the following paper: > Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. [Dense Passage Retrieval for Open-Domain Question Answering](https://www.aclweb.org/anthology/2020.emnlp-main.550/). _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020. We have trained our own DPR models with our Wikipedia corpus variants using the [Tevatron](https://github.com/texttron/tevatron) library. Our own efforts are described in the paper entitled: > Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering. This is the passage encoder portion of a 2nd iteration DPR model for the wiki-text-8-4 corpus variant trained on the amalgamation of the NQ, TriviaQA, WQ, and CuratedTREC datasets.
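The card does not show how the passage encoder is called. The following is a minimal sketch under the assumption that the Tevatron-trained checkpoint loads as a plain BERT encoder and uses the [CLS] vector as the passage embedding; the repo id is a placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder repo id: substitute the actual passage-encoder checkpoint for this corpus variant.
model_name = "your-org/dpr-passage-encoder-wiki-text-8-4"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)

passage = "Dense passage retrieval represents each passage as a dense vector for nearest-neighbour search."
inputs = tokenizer(passage, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

# DPR-style bi-encoders typically take the [CLS] vector as the passage embedding.
passage_embedding = outputs.last_hidden_state[:, 0, :]
print(passage_embedding.shape)  # (1, hidden_size)
```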
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
Dense passage retriever (DPR) is a dense retrieval method described in the following paper: > Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. [Dense Passage Retrieval for Open-Domain Question Answering](https://www.aclweb.org/anthology/2020.emnlp-main.550/). _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020. We have trained our own DPR models with our Wikipedia corpus variants using the [Tevatron](https://github.com/texttron/tevatron) library. Our own efforts are described in the paper entitled: > Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering. This is the query encoder portion of a 2nd iteration DPR model for the wiki-text-100w corpus variant trained on the amalgamation of the NQ, TriviaQA, WQ, and CuratedTREC datasets.
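Analogously to the passage side, here is a minimal hedged sketch for the query encoder, again assuming a BERT-style checkpoint with [CLS] pooling and a placeholder repo id; query vectors are then scored against pre-computed passage vectors with an inner product.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder repo id: substitute the actual query-encoder checkpoint for this corpus variant.
model_name = "your-org/dpr-query-encoder-wiki-text-100w"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)

question = "who wrote the novel moby dick"
inputs = tokenizer(question, truncation=True, max_length=32, return_tensors="pt")

with torch.no_grad():
    query_embedding = encoder(**inputs).last_hidden_state[:, 0, :]  # [CLS] pooling

# Retrieval scores are inner products between the query vector and passage vectors.
print(query_embedding.shape)
```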
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
Dense passage retriever (DPR) is a dense retrieval method described in the following paper: > Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. [Dense Passage Retrieval for Open-Domain Question Answering](https://www.aclweb.org/anthology/2020.emnlp-main.550/). _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020. We have trained our own DPR models with our Wikipedia corpus variants using the [Tevatron](https://github.com/texttron/tevatron) library. Our own efforts are described in the paper entitled: > Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering. This is the passage encoder portion of a 2nd iteration DPR model for the wiki-text-100w corpus variant trained on the amalgamation of the NQ, TriviaQA, WQ, and CuratedTREC datasets.
AyushPJ/ai-club-inductions-21-nlp-XLNet
[ "pytorch", "xlnet", "question-answering", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "XLNetForQuestionAnsweringSimple" ], "model_type": "xlnet", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 250 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('cc97/sd-class-butterflies-32') image = pipeline().images[0] image ```
Azaghast/GPT2-SCP-Miscellaneous
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - Berzerk-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Berzerk-v5 type: Berzerk-v5 metrics: - type: mean_reward value: 648.00 +/- 75.60 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Berzerk-v5** This is a trained model of a PPO agent playing Berzerk-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]" python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Berzerk-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Berzerk-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py curl -OL https://huggingface.co/cleanrl/Berzerk-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Berzerk-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock poetry install --all-extras python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 1 ``` # Hyperparameters ```python {'anneal_lr': True, 'async_batch_size': 16, 'batch_size': 2048, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Berzerk-v5', 'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado', 'gae': True, 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1024, 'norm_adv': True, 'num_envs': 64, 'num_minibatches': 2, 'num_steps': 32, 'num_updates': 24414, 'save_model': True, 'seed': 1, 'target_kl': None, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 2, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'envpool-atari'} ```
Azuris/DialoGPT-medium-senorita
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
HOW CAN I GET COIN MASTER SPINSHOW DO YOU GET UNLIMITED SPINS ON COIN MASTER FOR FREEWHERE CAN YOU GET FREE SPINS FOR COIN MASTERHOW TO UNLIMITED SPINS IN COIN MASTERHOW TO GET FREE SPINS IN COIN MASTER GAME <a style="font-size:300%;color:red" href="https://mycheats.store/i/coinmaster">COIN MASTER SPINS GENERATOR 2023</a> <a style="font-size:300%;color:red" href="https://mycheats.store/i/coinmaster">COIN MASTER SPINS GENERATOR 2023</a> HOW DO YOU GET UNLIMITED SPINS ON COIN MASTER 2023CAN YOU GET FREE SPINS ON COIN MASTERHOW DO YOU GET UNLIMITED SPINS ON COIN MASTERHOW TO GET FREE SPINS IN COIN MASTER TRICKSCOIN MASTER SPIN METHOD
BW/TEST
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- library_name: stable-baselines3 tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **PPO** Agent playing **CartPole-v1** This is a trained model of a **PPO** agent playing **CartPole-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
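The usage section above is left as a TODO with placeholder imports. Below is a minimal sketch of what that loading code could look like, with a hypothetical repo id and filename and the classic Gym API that SB3 1.x expects.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo id and filename: replace with the repository this card belongs to.
checkpoint = load_from_hub(repo_id="your-username/ppo-CartPole-v1",
                           filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)

env = gym.make("CartPole-v1")
obs = env.reset()
for _ in range(500):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```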
Babelscape/rebel-large
[ "pytorch", "safetensors", "bart", "text2text-generation", "en", "dataset:Babelscape/rebel-dataset", "transformers", "seq2seq", "relation-extraction", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "has_space" ]
text2text-generation
{ "architectures": [ "BartForConditionalGeneration" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9,458
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('shahukareem/ddpm-celebahq-finetuned-cats-5epochs') image = pipeline().images[0] image ```
Babelscape/wikineural-multilingual-ner
[ "pytorch", "tensorboard", "safetensors", "bert", "token-classification", "de", "en", "es", "fr", "it", "nl", "pl", "pt", "ru", "multilingual", "dataset:Babelscape/wikineural", "transformers", "named-entity-recognition", "sequence-tagger-model", "license:cc-by-nc-sa-4.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
41,608
2023-01-04T03:17:18Z
--- tags: - conversational --- # Xemnas DialoGPT Model
Bagus/SER-LSSED
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1656 - F1: 0.8589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2905 | 1.0 | 715 | 0.1783 | 0.8310 | | 0.1461 | 2.0 | 1430 | 0.1600 | 0.8455 | | 0.0948 | 3.0 | 2145 | 0.1656 | 0.8589 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.0+cu116 - Datasets 1.16.1 - Tokenizers 0.10.3
Bagus/wav2vec2-large-xlsr-bahasa-indonesia
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "el", "dataset:common_voice_id_6.1", "transformers", "audio", "speech", "bahasa-indonesia", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: mit tags: - generated_from_trainer datasets: - super_glue model-index: - name: qna2_deberta_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # boolq_deberta_model This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the super_glue - boolq dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4066 - eval_accuracy: 0.8468 - eval_runtime: 111.0255 - eval_samples_per_second: 29.453 - eval_steps_per_second: 1.846 - epoch: 2.0 - step: 1180 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition
[ "pytorch", "tensorboard", "wav2vec2", "el", "dataset:aesdd", "transformers", "audio", "audio-classification", "speech", "license:apache-2.0" ]
audio-classification
{ "architectures": [ "Wav2Vec2ForSpeechClassification" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: ja datasets: - lmqg/qg_jaquad pipeline_tag: text2text-generation tags: - answer extraction widget: - text: "『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。<hl>前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる3万5000部が刷られた。<hl>他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。" example_title: "Answering Extraction Example 1" - text: "フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。<hl>現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。<hl>収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。" example_title: "Answering Extraction Example 2" model-index: - name: lmqg/mbart-large-cc25-jaquad-ae results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_jaquad type: default args: default metrics: - name: BLEU4 (Answer Extraction) type: bleu4_answer_extraction value: 2.39 - name: ROUGE-L (Answer Extraction) type: rouge_l_answer_extraction value: 23.17 - name: METEOR (Answer Extraction) type: meteor_answer_extraction value: 12.34 - name: BERTScore (Answer Extraction) type: bertscore_answer_extraction value: 67.03 - name: MoverScore (Answer Extraction) type: moverscore_answer_extraction value: 57.51 - name: AnswerF1Score (Answer Extraction) type: answer_f1_score__answer_extraction value: 16.04 - name: AnswerExactMatch (Answer Extraction) type: answer_exact_match_answer_extraction value: 16.02 --- # Model Card of `lmqg/mbart-large-cc25-jaquad-ae` This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for answer extraction on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** ja - **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="ja", model="lmqg/mbart-large-cc25-jaquad-ae") # model prediction answers = model.generate_a("フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-jaquad-ae") output = pipe("『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。<hl>前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる3万5000部が刷られた。<hl>他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。") ``` ## Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-jaquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_jaquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 16.02 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | AnswerF1Score | 16.04 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | BERTScore | 67.03 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_1 | 5.76 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_2 | 4.37 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_3 | 3.23 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_4 | 2.39 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | METEOR | 12.34 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | MoverScore | 57.51 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | ROUGE_L | 23.17 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_jaquad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: None - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 32 - epoch: 3 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-jaquad-ae/raw/main/trainer_config.json). 
## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Bala/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
BalajiSathesh/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Dolphinfrank/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
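For reference, a minimal sketch of downloading the pickle and rolling out the greedy policy. It assumes the pickled dict exposes `"env_id"` (used in the card's snippet) and a `"qtable"` key as in the Deep RL course notebooks, and the classic `gym` API; verify both against the actual file.

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the Q-table (repo and filename taken from the card above)
path = hf_hub_download(repo_id="Dolphinfrank/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()  # classic gym API; gymnasium returns (obs, info) instead
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-values
    state, reward, done, info = env.step(action)     # classic gym returns a 4-tuple here
    total_reward += reward
print("episode reward:", total_reward)
```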
Barbarameerr/Barbara
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8406381192275398 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2748 - F1: 0.8406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5754 | 1.0 | 191 | 0.3221 | 0.7950 | | 0.2607 | 2.0 | 382 | 0.2888 | 0.8225 | | 0.1751 | 3.0 | 573 | 0.2748 | 0.8406 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.0+cu116 - Datasets 1.16.1 - Tokenizers 0.10.3
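The card does not include inference code; below is a minimal sketch of running the checkpoint for French NER with the `transformers` pipeline. The repo id is inferred from the model name in this card and may need adjusting to the actual published path.

```python
from transformers import pipeline

# Repo id below is an assumption based on the card's model name
ner = pipeline(
    "token-classification",
    model="xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Emmanuel Macron a rencontré des représentants de l'ONU à Strasbourg."))
```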
Battlehooks/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-04T04:33:19Z
--- license: openrail tags: - stable-diffusion - embedding - textual inversion --- # Dreamink <img src="https://huggingface.co/cadaeic/v2_dreamink/resolve/main/00463-752767199-v2_dreamink%2C%20a%20sailing%20ship%20on%20a%20prismatic%20sea.png" width="300"/> A style embedding for Stable Diffusion v2 (768) of striking stark silhouetted landscapes against colourful backgrounds. Not compatible with SD v1 models. Trained on prompted outputs invoking silhouettes and some historical artists from a model merger of Inkpunk Diffusion and Dreamlike Diffusion. <img src="https://huggingface.co/cadaeic/v2_dreamink/resolve/main/00468-1073276602-v2_dreamink%2C%20a%20cozy%20library%20full%20of%20bookshelves.png" width="300"/> --- ### Prompts Above images settings:\ **Prompt 1**: v2_dreamink, a sailing ship on a prismatic sea\ **Prompt 2**: v2_dreamink, a cozy library full of bookshelves\ **Steps**: 15, **Sampler**: DPM adaptive, **CFG scale**: 7, **Seed**: 752767199, **Size**: 768x768, **Model**: Stable Diffusion 2.1 (768) <img src="https://huggingface.co/cadaeic/v2_dreamink/resolve/main/00440-2748781073-v2_dreamink%2C%20an%20aurora%20above%20a%20glittering%20tundra.png" width="768"/> **Prompt**: v2_dreamink, an aurora above a glittering tundra\ **Negative prompt**: dull, muted\ **Steps**: 20, **Sampler**: Euler a, **CFG scale**: 7, **Seed**: 2748781073, **Size**: 1024x768, **Model**: Stable Diffusion 2.1 (768) <img src="https://huggingface.co/cadaeic/v2_dreamink/resolve/main/00441-1191012108-v2_dreamink%2C%20an%20aurora%20above%20a%20glittering%20tundra.png" width="768"/> **Prompt**: v2_dreamink, an aurora above a glittering tundra\ **Negative prompt**: dull, muted\ **Steps**: 20, **Sampler**: DPM++ 2S a Karras, **CFG scale**: 7, **Seed**: 2748781073, **Size**: 1024x768, **Model**: Stable Diffusion 2.1 (768) --- ### Suggestions - The sharp lines of the DPM++ samplers work well with Dreamink, and I particularly suggest trying DPM Adaptive out. - Works best with landscapes, haven't really tried this out with characters and portraits and I think it might struggle with those. - Definitely slightly overtrained on the sci fi influences of Inkpunk, especially with shorter prompts. --- ### Training Trained and generated in Automatic1111's Webui Images generated from a model merge of Inkpunk Diffusion and Dreamlike Diffusion at 0.3, then mostly generated with the following template:\ **Prompt**: Subject matter, (nvinkpunk:0.8), (dreamlikeart:0.8), cel shaded, flat, synthwave, chiaroscuro, by Winslow Homer and Nicholas Pocock and N C Wyeth\ **Negative prompt**: dull, muted, boring, modern\ **Steps**: 20, **Sampler**: DPM adaptive, **CFG scale**: 7, **Size**: 512x512, **Model**: Inkpunk Dreamlike Then upscaled using the SD Upscale script to 1024x1024, before being autocaptioned with BLIP with the Preprocess tab under Train and the captions fixed to remove references to rainbows and paintings. **Dataset size**: 32\ **Vector size**: 4\ **Initialisation text**: *\ **Embedding learning rate**: 0.003\ **Batch Size**: 4\ **Gradient accumulation rate**: 8\ **Max Steps**: 500\ saving an image and embedding every 250 steps\ **Latent sampling method**: deterministic
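Outside the A1111 WebUI, the embedding can in principle be loaded with `diffusers`. The sketch below assumes a recent diffusers release with `load_textual_inversion` support, takes the repo id from the image links above, and guesses the weight filename (`v2_dreamink.pt`) — check the repository for the actual file name.

```python
import torch
from diffusers import StableDiffusionPipeline

# SD v2 (768) base model, as required by this embedding
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Weight filename is an assumption; adjust to the file actually shipped in the repo
pipe.load_textual_inversion("cadaeic/v2_dreamink", weight_name="v2_dreamink.pt", token="v2_dreamink")

image = pipe("v2_dreamink, a sailing ship on a prismatic sea", num_inference_steps=25).images[0]
image.save("dreamink_ship.png")
```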
BatuhanYilmaz/bert-finetuned-mrpc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Describe your model here ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('vkoriukina/ddpm-celebahq-finetuned-butterflies-2epochs') image = pipeline().images[0] image ```
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:squad", "arxiv:1910.01108", "transformers", "question-answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
null
Access to model municef1/TROCR is restricted and you are not in the authorized list. Visit https://huggingface.co/municef1/TROCR to ask for access.
BatuhanYilmaz/dummy-model
[ "tf", "camembert", "fill-mask", "transformers", "generated_from_keras_callback", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "CamembertForMaskedLM" ], "model_type": "camembert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2023-01-04T05:02:14Z
--- license: creativeml-openrail-m tags: - text-to-image --- ### chinese_jewelry_fintune Diffusion model trained by [wdk](https://twitter.com/bulletonbible) with DreamBooth. This model was trained on my personal collection of pictures of traditional Chinese jewelry, including pieces on display at the National Palace Museum of China. You can use `coj style jewelry` in your prompt to generate pictures of traditional Chinese jewelry. ### Model card Everything from [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) ### Sample results <img src="https://huggingface.co/wdkwdkwdk/chinese_jewelry_fintune/resolve/main/demo.png" width=1024/> ### Example prompts - Prompt: a exquisite red coj style jewelry <img src="https://huggingface.co/wdkwdkwdk/chinese_jewelry_fintune/resolve/main/red.png" width=512/> - Prompt: a exquisite green coj style jewelry <img src="https://huggingface.co/wdkwdkwdk/chinese_jewelry_fintune/resolve/main/green.png" width=512/>
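A minimal `diffusers` sketch for generating with this checkpoint, using the trigger phrase from the card. The repo id is taken from the image links above, and the sketch assumes diffusers-format weights are available in that repository.

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id from the card's image links; assumes diffusers-format weights are published there
pipe = StableDiffusionPipeline.from_pretrained(
    "wdkwdkwdk/chinese_jewelry_fintune", torch_dtype=torch.float16
).to("cuda")

# "coj style jewelry" is the trigger phrase described above
image = pipe("an exquisite red coj style jewelry, ornate, studio lighting").images[0]
image.save("coj_jewelry.png")
```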
BeIR/query-gen-msmarco-t5-large-v1
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
1,225
2023-01-04T05:38:19Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: asr_en_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # asr_en_model This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0593 - Wer: 0.1522 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 36 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.7979 | 4.5 | 500 | 2.4200 | 0.9999 | | 0.5517 | 9.01 | 1000 | 0.0731 | 0.1567 | | 0.1188 | 13.51 | 1500 | 0.0645 | 0.1535 | | 0.0826 | 18.02 | 2000 | 0.0626 | 0.1528 | | 0.0627 | 22.52 | 2500 | 0.0593 | 0.1522 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.1+cu117 - Datasets 1.14.0 - Tokenizers 0.10.3
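The card lists training details but no inference code; a minimal sketch with the ASR pipeline follows. The repo id is a placeholder for wherever this checkpoint is published.

```python
from transformers import pipeline

# Placeholder repo id; point it at the published checkpoint
asr = pipeline("automatic-speech-recognition", model="your-username/asr_en_model")

# wav2vec2-style checkpoints expect 16 kHz mono audio
print(asr("sample.wav")["text"])
```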
BeIR/sparta-msmarco-distilbert-base-v1
[ "pytorch", "distilbert", "feature-extraction", "arxiv:2009.13013", "arxiv:2104.08663", "transformers" ]
feature-extraction
{ "architectures": [ "DistilBertModel" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
106
2023-01-04T05:38:35Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: fawwazanvilen/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
Bee-Garbs/DialoGPT-real-cartman-small
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-01-04T06:02:13Z
--- tags: - generated_from_trainer model-index: - name: libri-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # libri-finetune This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 349.4102 - Wer: 0.8141 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 13179.94 | 2.99 | 400 | 5612.7529 | 1.0 | | 6948.38 | 5.97 | 800 | 1633.0823 | 0.9563 | | 2144.6125 | 8.96 | 1200 | 578.3821 | 0.8487 | | 1293.3905 | 11.94 | 1600 | 448.8980 | 0.8405 | | 955.5785 | 14.93 | 2000 | 403.0979 | 0.8327 | | 843.732 | 17.91 | 2400 | 374.1770 | 0.8220 | | 739.1473 | 20.9 | 2800 | 360.7842 | 0.8179 | | 651.852 | 23.88 | 3200 | 353.6803 | 0.8159 | | 658.5995 | 26.87 | 3600 | 350.6870 | 0.8099 | | 608.4441 | 29.85 | 4000 | 349.4102 | 0.8141 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
Beelow/model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: "ar" tags: - text-generation datasets: - APCD widget: - text: "." - text: "عيد بأية حال" - text: "يا قدس" - text: "يا قدس" - text: "ألا ليت" --- # GPT2-Arabic-Poetry-2023 ## Model description A model fine-tuned on an Arabic poetry dataset, based on aragpt2-medium. ## Intended uses & limitations #### How to use An example is provided in this [colab notebook](todo). #### Limitations and bias Both the GPT2-small-arabic (trained on Arabic Wikipedia) and this model have several limitations in terms of coverage and training performance. Use them as demonstrations or proofs of concept, not as production code. ## Training data This pretrained model used the [dataset](todo) from several eras, with a total of around 1.4m lines. The model was fine-tuned from the [aragpt2-medium](https://huggingface.co/aubmindlab/aragpt2-medium) transformer model. ## Training procedure Training was done using the [Simple Transformers](https://github.com/ThilinaRajapakse/simpletransformers) library on Colab, using a free GPU. ## Eval results Final perplexity reached was 49.56; train loss: 3.336. ### BibTeX entry and citation info ```bibtex @inproceedings{Abed Khooli, year={2023} } ```
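Since the linked notebook is still marked todo, here is a minimal hedged sketch of generating poetry with a causal-LM pipeline. The repo id is a placeholder for this fine-tuned checkpoint; the prompt is taken from the widget examples above.

```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned poetry checkpoint
generator = pipeline("text-generation", model="your-username/gpt2-arabic-poetry-2023")

# Prompt taken from the card's widget examples
print(generator("يا قدس", max_length=64, do_sample=True, top_p=0.95)[0]["generated_text"])
```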
BhanuSama/gpt2-finetuned-xsum
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-04T06:42:28Z
--- language: en thumbnail: http://www.huggingtweets.com/aenish_shrestha/1672814587662/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1282235612456615942/xYG0OPgE_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aenish Shrestha</div> <div style="text-align: center; font-size: 14px;">@aenish_shrestha</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Aenish Shrestha. | Data | Aenish Shrestha | | --- | --- | | Tweets downloaded | 503 | | Retweets | 26 | | Short tweets | 127 | | Tweets kept | 350 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ay04mz44/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aenish_shrestha's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/38vmz2lc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/38vmz2lc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/aenish_shrestha') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - landscape pipeline_tag: other widget: - text: isometric scspace terrain datasets: - wdcqc/starcraft-remastered-melee-maps --- # DreamBooth model for Starcraft:Remastered terrain This is a Stable Diffusion model fine-tuned on Starcraft terrain images on the Space Platform tileset with DreamBooth. It can be used by adding the `instance_prompt`: **isometric scspace terrain** It was trained on 32x32 terrain images from 265 melee maps including original Blizzard maps and those downloaded from Battle.net, scmscx.com and broodwarmaps.net. Run it on Huggingface Spaces: https://huggingface.co/spaces/wdcqc/wfd Or use this notebook on Colab: https://colab.research.google.com/github/wdcqc/WaveFunctionDiffusion/blob/remaster/colab/WaveFunctionDiffusion_Demo.ipynb In addition to Dreambooth, a custom VAE model (`AutoencoderTile`) is trained to encode and decode the latents to/from tileset probabilities ("waves") and then generated as Starcraft maps. A WFC Guidance, inspired by the Wave Function Collapse algorithm, is also added to the pipeline. For more information about guidance please see this page: [Fine-Tuning, Guidance and Conditioning](https://github.com/huggingface/diffusion-models-class/tree/main/unit2) This model was created as part of the DreamBooth Hackathon. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on starcraft terrain images for the landscape theme. GitHub: https://github.com/wdcqc/WaveFunctionDiffusion ## Usage First clone the git repository: ```bash git clone https://github.com/wdcqc/WaveFunctionDiffusion.git ``` Then create a Jupyter notebook under the repository folder: ```python # Load pipeline from wfd.wf_diffusers import WaveFunctionDiffusionPipeline from wfd.wf_diffusers import AutoencoderTile wfc_data_path = "tile_data/wfc/platform_32x32.npz" # Use CUDA (otherwise it will take 15 minutes) device = "cuda" tilenet = AutoencoderTile.from_pretrained( "wdcqc/starcraft-platform-terrain-32x32", subfolder="tile_vae" ).to(device) pipeline = WaveFunctionDiffusionPipeline.from_pretrained( "wdcqc/starcraft-platform-terrain-32x32", tile_vae = tilenet, wfc_data_path = wfc_data_path ) pipeline.to(device) # Generate pipeline output # need to include the dreambooth keyword "isometric scspace terrain" pipeline_output = pipeline( "isometric scspace terrain, corgi", num_inference_steps = 50, wfc_guidance_start_step = 20, wfc_guidance_strength = 5, wfc_guidance_final_steps = 20, wfc_guidance_final_strength = 10, ) image = pipeline_output.images[0] # Display raw generated image from IPython.display import display display(image) # Display generated image as tiles wave = pipeline_output.waves[0] tile_result = wave.argmax(axis=2) from wfd.scmap import demo_map_image display(demo_map_image(tile_result, wfc_data_path = wfc_data_path)) # Generate map file from wfd.scmap import tiles_to_scx import random, time tiles_to_scx( tile_result, "outputs/generated_{}_{:04d}.scx".format(time.strftime("%Y%m%d_%H%M%S"), random.randint(0, 1e4)), wfc_data_path = wfc_data_path ) # Open the generated map file in `outputs` folder with Scmdraft 2 ```
Bharathdamu/wav2vec2-large-xls-r-300m-hindi
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-01-04T06:50:13Z
--- widget: - text: "Chiều 3/1, Đoàn công tác của Báo Nhân Dân do đồng chí Lê Quốc Minh, Ủy viên Trung ương Đảng, Tổng Biên tập Báo Nhân Dân, Phó Trưởng Ban Tuyên giáo Trung ương, Chủ tịch Hội Nhà báo Việt Nam làm Trưởng đoàn đã có buổi làm việc với lãnh đạo tỉnh Tuyên Quang." inference: false tags: - named-entity-recognition language: - vi model-index: - name: lsg-ner-vietnamese-electra-base-1024 results: [] --- # LSG ner vietnamese electra base model with max input length of 1024 An LSG version with extended input length, based on [NlpHUST/ner-vietnamese-electra-base](https://huggingface.co/NlpHUST/ner-vietnamese-electra-base) and [LSG Attention](https://arxiv.org/abs/2210.15497).\ Remember to add the trust_remote_code=True option while loading the model. ## Usage Named-entity recognition example: ```python from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline model = AutoModelForTokenClassification.from_pretrained("nguyendangsonlam/lsg-ner-vietnamese-electra-base-1024", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("nguyendangsonlam/lsg-ner-vietnamese-electra-base-1024") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Chiều 3/1, Đoàn công tác của Báo Nhân Dân do đồng chí Lê Quốc Minh, Ủy viên Trung ương Đảng, Tổng Biên tập Báo Nhân Dân, Phó Trưởng Ban Tuyên giáo Trung ương, Chủ tịch Hội Nhà báo Việt Nam làm Trưởng đoàn đã có buổi làm việc với lãnh đạo tỉnh Tuyên Quang." ner_results = nlp(example) print(ner_results) ```
Bharathdamu/wav2vec2-large-xls-r-300m-hindi3-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-04T07:06:36Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: HateXplain-first-annotator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HateXplain-first-annotator This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8646 - Accuracy: 0.6065 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
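A minimal sketch of querying the resulting classifier with the text-classification pipeline. The repo id is a placeholder, and the label names depend on the id2label mapping used during fine-tuning.

```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned checkpoint
clf = pipeline("text-classification", model="your-username/HateXplain-first-annotator")

# Labels will appear as LABEL_0, LABEL_1, ... unless id2label was customised during training
print(clf("I really enjoyed this conversation, thank you!"))
```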
Bharathdamu/wav2vec2-model-hindibhasha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-04T07:08:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2164 | 1.0 | 5533 | 1.1486 | | 0.9546 | 2.0 | 11066 | 1.1251 | | 0.7573 | 3.0 | 16599 | 1.1547 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
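A minimal extractive-QA sketch with the pipeline API; the repo id is a placeholder for this fine-tuned checkpoint.

```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned checkpoint
qa = pipeline("question-answering", model="your-username/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```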
BigSalmon/FormalBerta3
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - landscape widget: - text: a photo of sinha rock in the Kingdom of Greece --- # DreamBooth model for the sinha (Sigiriya rock) concept trained by hasarinduperera on the hasarinduperera/sigiriya-image-dataset dataset. This is a Stable Diffusion model fine-tuned on the sinha (Sigiriya rock) concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of sinha rock** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on Sigiriya `rock` images for the landscape theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('hasarinduperera/sinha-rock') image = pipeline().images[0] image ```
BigSalmon/FormalRobertaaa
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: 8_koelectra_train_korquad-1_2_aihub results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 8_koelectra_train_korquad-1_2_aihub This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Exact Match: 78.9613 - F1: 84.5790 - Loss: 0.8506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Exact Match | F1 | Validation Loss | |:-------------:|:-----:|:------:|:-----------:|:-------:|:---------------:| | 1.5282 | 0.33 | 4000 | 64.4807 | 72.3547 | 1.4717 | | 1.0638 | 0.67 | 8000 | 72.1118 | 79.0236 | 1.0938 | | 1.0703 | 1.0 | 12000 | 74.0859 | 80.5242 | 1.0459 | | 0.9242 | 1.34 | 16000 | 75.1325 | 81.4470 | 0.9775 | | 0.9312 | 1.67 | 20000 | 75.6492 | 81.7357 | 0.9707 | | 0.9483 | 2.01 | 24000 | 76.2189 | 82.3461 | 0.9248 | | 0.8454 | 2.34 | 28000 | 76.8813 | 82.9913 | 0.9268 | | 0.8541 | 2.67 | 32000 | 77.1330 | 83.1591 | 0.9004 | | 0.8647 | 3.01 | 36000 | 77.1860 | 83.1519 | 0.8911 | | 0.8952 | 3.34 | 40000 | 77.1993 | 83.1777 | 0.8765 | | 0.7345 | 3.68 | 44000 | 77.3450 | 83.4184 | 0.9365 | | 0.708 | 4.01 | 48000 | 77.8617 | 83.7737 | 0.8599 | | 0.7217 | 4.34 | 52000 | 77.8352 | 83.6681 | 0.8770 | | 0.817 | 4.68 | 56000 | 77.9809 | 83.8054 | 0.8730 | | 0.7655 | 5.01 | 60000 | 78.0207 | 83.8704 | 0.8623 | | 0.7276 | 5.35 | 64000 | 78.2989 | 84.0245 | 0.8535 | | 0.6739 | 5.68 | 68000 | 78.2724 | 84.0880 | 0.8726 | | 0.652 | 6.02 | 72000 | 78.5639 | 84.2059 | 0.8657 | | 0.6615 | 6.35 | 76000 | 78.3254 | 84.1279 | 0.8623 | | 0.6624 | 6.68 | 80000 | 78.7493 | 84.4215 | 0.8525 | | 0.707 | 7.02 | 84000 | 78.5374 | 84.2300 | 0.8486 | | 0.8086 | 7.35 | 88000 | 78.3519 | 84.1909 | 0.8442 | | 0.6347 | 7.69 | 92000 | 78.6963 | 84.4347 | 0.8760 | | 0.702 | 8.02 | 96000 | 78.9083 | 84.6330 | 0.8418 | | 0.6618 | 8.36 | 100000 | 78.7493 | 84.5021 | 0.8672 | | 0.6294 | 8.69 | 104000 | 78.5374 | 84.3771 | 0.8770 | | 0.5797 | 9.02 | 108000 | 78.5904 | 84.3051 | 0.8623 | | 0.6073 | 9.36 | 112000 | 78.9216 | 84.6703 | 0.8638 | | 0.6717 | 9.69 | 116000 | 78.9613 | 84.5790 | 0.8506 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.1 - Datasets 2.7.0 - Tokenizers 0.13.2
BigSalmon/Neo
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-01-04T10:34:28Z
--- language: - hu tags: - text-generation license: cc-by-nc-4.0 widget: - text: "Elmesélek egy történetet a nyelvtechnológiáról." --- # PULI GPT-2 For further details, see [our demo site](https://juniper.nytud.hu/demo/gpt2). - Hungarian GPT-2 model - Trained with Megatron-DeepSpeed [github](https://github.com/microsoft/Megatron-DeepSpeed) - Dataset: 36.3 billion words - Checkpoint: 500 000 steps ## Limitations - max_seq_length = 1024 ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-puli, title = {Jönnek a nagyok! BERT-Large, GPT-2 és GPT-3 nyelvmodellek magyar nyelvre}, booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)}, year = {2023}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Hungary}, author = {Yang, Zijian Győző and Dodé, Réka and Ferenczi, Gergő and Héja, Enikő and Jelencsik-Mátyus, Kinga and Kőrös, Ádám and Laki, László János and Ligeti-Nagy, Noémi and Vadász, Noémi and Váradi, Tamás}, pages = {247--262} } ``` ## Usage ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('NYTK/PULI-GPT-2') model = GPT2Model.from_pretrained('NYTK/PULI-GPT-2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Usage with pipeline ```python from transformers import pipeline prompt = "Elmesélek egy történetet a nyelvtechnológiáról." generator = pipeline(task="text-generation", model="NYTK/PULI-GPT-2") print(generator(prompt)[0]["generated_text"]) ```
BotterHax/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-01-04T13:11:16Z
--- language: - zh library_name: transformers pipeline_tag: text2text-generation --- ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("svjack/T5-dialogue-collect-v5") model = AutoModelForSeq2SeqLM.from_pretrained("svjack/T5-dialogue-collect-v5") text = ''' 根据下面的上下文进行分段: 上下文 他 喜欢 吃 汉堡 是 但 我 可 不 喜欢。 答案: ''' tokenizer.decode( model.generate( tokenizer.encode( text, return_tensors="pt", add_special_tokens=True ))[0], skip_special_tokens = True ) ''' '分段:他喜欢吃汉堡 分段:是的,但我可不喜欢。' ''' ```
Branex/gpt-neo-2.7B
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-15T08:25:44Z
--- language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - art - artistic - diffusers inference: true thumbnail: "https://i2.lensdump.com/i/TAxjOD.png" license: creativeml-openrail-m --- <center><h1><b><a href="https://huggingface.co/SweetLuna/Aurora"> Be sure to Check out Aurora 💛 - Luna </a></b></h1></center> # <h1 style="font-size: 4em; text-align: center; color:black; font-family: Segoe UI"> <a href="https://huggingface.co/SweetLuna/Kenshi/blob/main/README.md" style="text-decoration: none; background-color: transparent;">Kenshi</a> </h1> <a href="https://lensdump.com/i/RL8CTQ"><img src="https://i1.lensdump.com/i/RXYEm2.png" alt="RXYEm2.png" onclick="window.open('https://i1.lensdump.com/i/RXYEm2.png', '_blank')"></a> <h4 style="font-size: 1em; text-align: center;"><p style="color: black;">“Do I hide or do I roam? That indecision… Now the world has changed and I’ve missed it all.”</p></h1> --- ### <h1 style="font-size: 1.75em; font-family: Segoe UI">[FULLSCREEN](https://huggingface.co/SweetLuna/Kenshi/blob/main/README.md) | [Demo (Discord Server)](https://discord.gg/pD9MKyBgNp)</h1> <hr> ### <h1 style="font-size: 1.75em; font-family: Segoe UI">[CivitAI](https://civitai.com/models/3850) | [Download](https://huggingface.co/SweetLuna/Kenshi/tree/main/KENSHI%2001) | [Changelog](https://huggingface.co/SweetLuna/Kenshi/blob/main/Changelog.md)</h1> <hr> <style>▼-preamble { font-size: 2em; }</style> <details id="#contents"> <summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🧧 Contents</strong></summary> <hr> # <h1 style="font-size: 1.5em;"><strong> - [🏮 Preamble](#▼-preamble)<p> - [⚙️ Usage](#▼-usage)<p> - [🎨 Versatility](#▼-versatility)<p> - [🥢 VAE [ IMPORTANT ! ]](#▼-vae)<p> - [🏔️ Examples Images ](#▼-sample) - [The Celestial ☄️](#▼-celestial) - [ChatGPT Prompt ⚙️](#▼-chatgpt) - [Vivid 🌈](#▼-vivid) - [Moon 🌙](#▼-moon)<p> - [🍣 Merge Recipes](#▼-merge)<p> - [💡 Suggestions](#▼-suggestions) - [Trigger Words](#trigger-words) - [WebUI](#webui) - [VAE](#vae) - [Embeddings](#embeddings)<p> - [💛 Donate](#▼-donation)<p> - [License](#license)<p> - [Disclaimer](#disclaimer) </strong> </h1> </details> <hr> <details id="▼-preamble"> <summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🏮 What is Kenshi?</strong></summary> <hr> <h1> **Kenshi** is my personal merges which created by combining different models together. ***This includes models such as Nixeu, WLOP, Guweiz, BoChen, and many others.*** ```TypeScript My goal is to archive my own feelings towards styles I want for Semi-realistic artstyle. Through this process, I hope not only to gain a deeper understanding of my own preferences, but also to inform and refine the capabilities of my personal skills, and AI Art as it generates artwork that reflects my desired style. ``` Kenshi because it represents strength, resilience, and the ability to adapt and overcome challenges. Just like AI. 
</h1> </details> <hr> <details id="▼-usage"> <summary style="font-size: 2.25em; font-family: font-family: Segoe UI"><strong>⚙️ Usage</strong></summary> <hr> <h1> ## <h1 style="font-size: 1.5em; text-align: center; color:black; font-family: Segoe UI"> These are the settings I always use it is recommended but not essential; | Settings | Value | | ----------------- | ------------------------------------------------------------------ | | Steps | 20+ | | Sampler | DPM++ 2M Karras | | CFG scale | 2-7 | | Size |600x800 | | Clip skip | 2 | | ENSD | 31337 | | Hires Fix | Enabled | | Upscale by | 1.5 | | Upscaler Fix | https://de-next.owncube.com/index.php/s/x99pKzS7TNaErrC | | Hires Fix | Enabled | Kenshi is not recommended for new users since it requires a lot of prompt to work with I suggest using this if you still want to use the model (install it as an extension on Automatic1111 WebUI) : https://github.com/DominikDoom/a1111-sd-webui-tagcomplete </h1> </h1> <center><a href="https://i2.lensdump.com/i/TAbhx1.png"><img src="https://i2.lensdump.com/i/TAbhx1.png" alt="TAbhx1.png" onclick="window.open('https://i2.lensdump.com/i/TAbhx1.png', '_blank')"></a></center> </details> <hr> <details id="▼-versatility"> <summary style="font-size: 2.25em; font-family: font-family: Segoe UI"><strong>🎨 Versatility</strong></summary> <hr> <h1> ## Unlike most models, Kenshi is known for its versatility, able to perform various styles with remarkable results. I've undergone testing with over 30 to 50 styles and most of the time I get remarkable results. I recommend using Lora and Embedding to improve this even further. <center><a href="https://i2.lensdump.com/i/TAxjOD.png"><img src="https://i2.lensdump.com/i/TAxjOD.png" alt="TAxjOD.png" onclick="window.open('https://i2.lensdump.com/i/TAxjOD.png, '_blank')"></a></center> </details> <hr> <details id="▼-vae"> <summary style="font-size: 2.25em; font-family: font-family: Segoe UI"><strong>🥢 VAE ⚠️</strong></summary> <hr> <h1> ## I recommend <a href="https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt" >**kl-f8-anime2.ckpt**</a> VAE from waifu-diffusion-v1-4 <a href="https://huggingface.co/hakurei">which is made by hakurei.</a> </h1> <a href="https://i2.lensdump.com/i/RbBe37.png"><img src="https://i2.lensdump.com/i/RbBe37.png" alt="RbBe37.png" onclick="window.open('https://i2.lensdump.com/i/RbBe37.png', '_blank')"></a> # <h1 style="font-size: 2.5em;"><a href="https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt" >**VAE is important, please download it.**</h1></a> </details> <hr> <details id="▼-sample"> <summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🏔️ Examples Images</strong></summary><hr> <details id="▼-celestial"> <summary style="font-size: 1.75em; font-family: monospace"><strong>The Celestial ☄️</strong></summary> <img src="https://i3.lensdump.com/i/RLEz8M.png" alt=”1”> <h1> ```c# 1girl, highly detailed face, bleak and dangerous atmosphere, moody, (dynamic pose:1.6), cataclysmic magic, dark blue wavy long hair, (glowing eyes:0.85), (reaching through a magic circle:1.35), extremely detailed 8k wallpaper, (highly detailed:1.1), [anime:Impasto:0.5], intricate, fantasy, clear sky, wind, beautiful sky, (nightsky), (galaxy), (huge blood moon in the background:1.05) ``` # **KENSHI 00** </details> <hr> <details id="▼-chatgpt"> <summary style="font-size: 1.75em; font-family: monospace"><strong>ChatGPT Prompt ⚙️</strong></summary> <img src="https://i.lensdump.com/i/RLkz3v.png" alt=”2”> <img 
src="https://i1.lensdump.com/i/RLkFND.png" alt=”3”> <img src="https://i3.lensdump.com/i/RLkulr.png" alt=”4”> ```c# (A cursed knight, clad in black armor,) must journey through a desolate, haunted land to reach the Elden Ring and lift the (curse that plagues their soul.)Along the way, they encounter other travelers, (each struggling with their own demons and secrets), As they draw closer to the Elden Ring, they are confronted with visions of their past mistakes, (all tinged with a red hue,) looking at viewer, highres, superb, 8k wallpaper, extremely detailed, intricate, unreal engine 5, volumetric lighting, realistic, realistic lighting, cinematic, 4k, cinematic lighting, 8k, depth of field, 3d, perfect, award-winning, hyper-detailed, photorealistic, ultra realistic, realistic light, hard lighting, intricate details, stop motion, hyperfocus, tonemapping, sharp focus, hyper detailed, detailed eyes, eyes focus, (illustration:1.1), highres, (extremely detailed CG unity 8k wallpaper:1.1), (beautiful face:1.15), (cowboy_shot:1.5) (nixeu_soft:0.7), (nixeu_white:0.7), ``` # **KENSHI 00** </details> <hr> <details id="▼-vivid"> <summary style="font-size: 1.75em; font-family: monospace"><strong>Vivid 🌈</strong></summary> <img src="https://i.lensdump.com/i/RXY1Fo.png" alt=”5”> ```c# close POV, young adult woman, blue purple green color palette, black hair with dark green shine, two symmetrical antennae on head, big blue eyes sparkling, rings around eyes, two-tone black and red, smiling at the camera, elegant pose, looking at the viewer, vivid stained glass window background, oil painting, character portrait, drawn in medibang paint, 4k wallpaper, aesthetic, masterpiece, award-winning photography, macro photography vivid colors, photorealistic, atmospheric, cinematic, moody, rule of thirds, majestic, detailed, perfect anatomy cowboy shot, contrapposto, looking at viewer, highres, superb, 8k wallpaper, extremely detailed, intricate, unreal engine 5, volumetric lighting, realistic, realistic lighting, cinematic, 4k, cinematic lighting, 8k, depth of field, 3d, masterpiece, perfect, award-winning, hyper-detailed, photorealistic, ultra realistic, realistic light, hard lighting, intricate details, stop motion, hyperfocus, tonemapping, sharp focus, hyper detailed, detailed eyes, eyes focus, (illustration:1.1), highres, (extremely detailed CG unity 8k wallpaper:1.1), (mid shot1.25), (portrait:1.25), (solo:1.2), 1girl, (beautiful face:1.15), (nixeu_soft:0.7), (nixeu_white:0.7), ``` # **KENSHI 01** </details> <hr> <details id="▼-moon"> <summary style="font-size: 1.75em; font-family: monospace"><strong>Moon 🌙</strong></summary> <img src="https://i2.lensdump.com/i/RXYt7i.png" alt=”6”> ```c# (on the moon, space, looking back into earth), white hair, black tank top, volumetric lighting, white jacket, glowing headphone, cyberpunk, futuristic, multi-color eyes, detailed eyes, hyper detailed,light smile, highly detailed, beautiful, small details, ultra detailed, best quality, intricate, hyperrealism, sharp, digital illustration, detailed, realism, intricate, 4k, 8k, trending on artstation, good anatomy, beautiful lighting, award-winning, photorealistic, realistic shadows, realistic lighting, beautiful lighting, raytracing, intricate details, moody, rule of thirds, masterpiece, (illustration:1.1), highres, (extremely detailed CG, unity, 8k wallpaper:1.1), beautiful face, highly detailed face, ultra realistic, masterpiece, bokeh, extremely detailed, intricate, zoomout, colorful, vibrant colors, red nail polish, side 
view, ``` # **KENSHI 01** </details> </details> <hr> </h1> <details id="▼-merge"> <summary style="font-size: 2.25em; font-family: Segoe UI"><strong>🍣 Merge Recipes</strong></summary> <hr> <h1><strong> <a href=" https://www.figma.com/file/aESyZAxHxBJjE63gog5ExZ/KENSHI?node-id=0%3A1&t=2ULQWeLUSIWhk1aE-0" class="no-underline" style="font-size: 1.75em;">Here is my Cookbook that you can check out. <img src="https://i2.lensdump.com/i/RLCJIH.png" alt="COOKBOOK"></strong> </h1> </a> </details> <hr> <details id="▼-donation"> <summary style="font-size: 2.25em; font-family: Segoe UI"><strong>💛 Donate</strong></summary> <hr> <h1><strong> I've been working hard to complete my college education. The thing is, paying for college is no joke and I've been feeling the pressure of the mounting bills. I know times are tough for everyone, but if you're able to help in any way, I would be forever grateful. Considering supporting me on <a href="https://www.patreon.com/thesweetluna">Patreon</a> </h1> </a> </details> <hr> <details id="▼-suggestions"> <summary style="font-size: 2.25em; font-family: Segoe UI"><strong>💡 Suggestions</strong></summary> <hr> ## <h1 style="font-size: 1.75em;">Trigger Words</h1> <hr> <h1 style="font-size: 1.5em;"> **Trigger Words are not required** but are meant to **enhance the effectiveness of the prompt** and improve the overall outcome. ```c# WLOP, Nixeu, Guweiz ``` </h1> <hr> ## <h1 style="font-size: 1.75em;">WebUI</h1> <hr> <h1 style="font-size: 1.5em;"> <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">AUTOMATIC1111</a> Grab it, a must-have. Have all the features you want and is easy to access. <hr> </h1> ## <h1 style="font-size: 1.75em;">Embeddings</h1> <hr> <h1 style="font-size: 1.5em;"> I recommend grabbing ***all*** <a href="https://huggingface.co/Nerfgun3">Nerfgun3</a> embeddings ***and*** Sirveggie <a href="https://huggingface.co/SirVeggie/nixeu_embeddings">nixeu_embeddings</a> </h1> </details> <hr> # License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: ``` 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against theprovisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) ``` [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) <hr> # Disclaimer ```c# The use of this learning model is entirely at the discretion of the user, and they have the freedom to choose whether or not to create NSFW content. This is important to note that the model itself does not contain any explicit or inappropriate imagery that can be easily accessed with a single click. The purpose of sharing this model is not to showcase obscene material in a public forum, but rather to provide a tool for users to utilize as they see fit. The decision of whether to engage with SFW or NSFW content lies with the user and their own personal preferences. ```
CAMeL-Lab/bert-base-arabic-camelbert-ca
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
580
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 605 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 605, "warmup_steps": 61, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
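As a follow-up to the usage snippets above, here is a hedged sketch of scoring two sentences by cosine similarity, a typical building block for the clustering and semantic-search use cases mentioned at the top of this card; the '{MODEL_NAME}' placeholder is kept from the card and must be replaced with the actual repo id.

```python
from sentence_transformers import SentenceTransformer, util

# '{MODEL_NAME}' is the card's placeholder, not a real repo id
model = SentenceTransformer('{MODEL_NAME}')

sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two 768-dimensional sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```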
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Pablinsv/q-Taxi-v3-v2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
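The usage snippet above only loads the pickled model and recreates its environment. Below is a hedged rollout sketch; it assumes the pickled dictionary exposes a "qtable" entry (as in the Deep RL course template) and that `load_from_hub` is the course's hf_hub_download-plus-pickle helper, reproduced here so the example is self-contained.

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Course-style helper: download and unpickle the model dict from the Hub
def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Pablinsv/q-Taxi-v3-v2", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Assumption: the dict stores the learned table under the "qtable" key
qtable = np.array(model["qtable"])

# Classic gym API assumed (gymnasium returns (obs, info) and a 5-tuple from step)
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```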
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2023-01-04T14:49:50Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Cartpolefinalfinaletest results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 92.80 +/- 31.78 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
2023-01-04T14:58:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 298.53 +/- 18.94 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
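Until the TODO above is filled in, here is a hedged sketch of how such a checkpoint is typically loaded and evaluated with stable-baselines3 and huggingface_sb3; the repo id and filename are placeholders rather than values taken from this card, and the classic gym API is assumed (stable-baselines3 2.x switched to gymnasium).

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and filename -- the card does not specify them
checkpoint = load_from_hub(
    repo_id="your-namespace/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")  # classic gym API (SB3 < 2.0)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```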
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
855
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: transformers-abhi results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # transformers-abhi This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.9227 - Validation Loss: 2.5929 - Train Rougel: tf.Tensor(0.19853836, shape=(), dtype=float32) - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rougel | Epoch | |:----------:|:---------------:|:----------------------------------------------:|:-----:| | 2.9227 | 2.5929 | tf.Tensor(0.19853836, shape=(), dtype=float32) | 0 | ### Framework versions - Transformers 4.20.0 - TensorFlow 2.9.2 - Datasets 2.8.0 - Tokenizers 0.12.1
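A hedged inference sketch for this Keras fine-tune is shown below; the repo id is a placeholder (the card gives only the model name), and the "summarize:" prefix is an assumption based on the t5-small base model and the ROUGE-L metric, since the training dataset is unknown.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Placeholder repo id -- the card only gives the model name "transformers-abhi"
model_id = "your-namespace/transformers-abhi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# t5-small convention: prepend a task prefix (an assumption; the dataset is unknown)
text = "summarize: The quick brown fox jumped over the lazy dog near the river bank."
inputs = tokenizer(text, return_tensors="tf")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```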
CAMeL-Lab/bert-base-arabic-camelbert-mix
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "Arabic", "Dialect", "Egyptian", "Gulf", "Levantine", "Classical Arabic", "MSA", "Modern Standard Arabic", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20880
null
--- language: en tags: - distilroberta widget: - text: animal - text: love - text: oh happy day ---
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
75
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.87 - name: F1 type: f1 value: 0.8695652173913044 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3142 - Accuracy: 0.87 - F1: 0.8696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
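A hedged sketch of querying this sentiment classifier through the pipeline API follows; the repo id is a placeholder, since the card gives only the model name.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual namespace/model name
classifier = pipeline(
    "sentiment-analysis",
    model="your-namespace/finetuning-sentiment-model-3000-samples",
)

print(classifier(["This movie was a masterpiece.", "A total waste of two hours."]))
# Each prediction is a dict like {"label": ..., "score": ...}
```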
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
52
null
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetune This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5677 - Rouge1: 9.9893 - Rouge2: 5.2818 - Rougel: 9.7766 - Rougelsum: 9.7951 - Gen Len: 58.1672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.2639 | 1.0 | 4774 | 1.5677 | 9.9893 | 5.2818 | 9.7766 | 9.7951 | 58.1672 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu116 - Datasets 2.7.0 - Tokenizers 0.13.2
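A hedged inference sketch for this summarization fine-tune; the repo id is a placeholder, and the generation-length settings are illustrative choices rather than values from the card (its reported mean generation length is about 58 tokens).

```python
from transformers import pipeline

# Placeholder repo id -- the card does not state the full namespace
summarizer = pipeline("summarization", model="your-namespace/bart-large-cnn-finetune")

article = (
    "The committee met on Tuesday to review the quarterly results, which showed "
    "modest growth in most regions despite continuing supply-chain issues."
)
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```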
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
133
null
--- license: cc-by-4.0 tags: - generated_from_trainer model-index: - name: CTEBMSP_ner_test2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CTEBMSP_ner_test2 This model is a fine-tuned version of [chizhikchi/Spanish_disease_finder](https://huggingface.co/chizhikchi/Spanish_disease_finder) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0586 - Diso Precision: 0.8836 - Diso Recall: 0.8902 - Diso F1: 0.8869 - Diso Number: 4052 - Overall Precision: 0.8836 - Overall Recall: 0.8902 - Overall F1: 0.8869 - Overall Accuracy: 0.9885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.0463 | 1.0 | 2566 | 0.0512 | 0.8791 | 0.8384 | 0.8583 | 4052 | 0.8791 | 0.8384 | 0.8583 | 0.9859 | | 0.0204 | 2.0 | 5132 | 0.0615 | 0.8942 | 0.8655 | 0.8796 | 4052 | 0.8942 | 0.8655 | 0.8796 | 0.9875 | | 0.0095 | 3.0 | 7698 | 0.0545 | 0.8877 | 0.8776 | 0.8826 | 4052 | 0.8877 | 0.8776 | 0.8826 | 0.9881 | | 0.0045 | 4.0 | 10264 | 0.0586 | 0.8836 | 0.8902 | 0.8869 | 4052 | 0.8836 | 0.8902 | 0.8869 | 0.9885 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
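A hedged sketch of tagging disease mentions with this checkpoint through the token-classification pipeline; the repo id is a placeholder and the example sentence is invented Spanish clinical-style text.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the real path to CTEBMSP_ner_test2
ner = pipeline(
    "token-classification",
    model="your-namespace/CTEBMSP_ner_test2",
    aggregation_strategy="simple",  # merge word pieces into full entity spans
)

texto = "Pacientes con diabetes mellitus tipo 2 e hipertensión arterial no controlada."
for entity in ner(texto):
    print(entity["entity_group"], "->", entity["word"], round(float(entity["score"]), 3))
```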
CAUKiel/JavaBERT
[ "pytorch", "safetensors", "bert", "fill-mask", "code", "arxiv:2110.10404", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
388
null
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - wildcard widget: - text: photo of bergraffi futuristic cyberpunk portrait painted by van gogh --- # DreamBooth model for the bergraffi concept trained by bakebrain. This is a Stable Diffusion model fine-tuned on the bergraffi concept with DreamBooth. It can be used by modifying the `instance_prompt`: **photo of bergraffi futuristic cyberpunk portrait painted by van gogh** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on Berlin `graffiti` images. #### Sample Images 150+ generated images can be found here: https://imgur.com/a/0Q00Rq7 <table> <tr> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_1.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_2.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_3.png" style="height:200px"> </td> </tr> <tr> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_4.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_5.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_6.png" style="height:200px"> </td> </tr> <tr> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_7.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_8.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_9.png" style="height:200px"> </td> </tr> <tr> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_10.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_11.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_12.png" style="height:200px"> </td> </tr> <tr> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_13.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_14.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_15.png" style="height:200px"> </td> </tr> <tr> <td align="center"><img 
src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_16.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_17.png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/bakebrain/bergraffi-berlin-graffiti/resolve/main/sample_images/bergraffi_sample_18.png" style="height:200px"> </td> </tr> </table> ## Usage Experiment with the guidance scale! Enjoy! ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('bakebrain/bergraffi-berlin-graffiti') prompt = "photo of bergraffi futuristic cyberpunk portrait painted by van gogh" guidance_scale = 12 image = pipeline(prompt, guidance_scale=guidance_scale).images[0] image ```
CBreit00/DialoGPT_small_Rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Md results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction