modelId: string (lengths 4 to 81)
tags: list
pipeline_tag: string (17 classes)
config: dict
downloads: int64 (0 to 59.7M)
first_commit: timestamp[ns, tz=UTC]
card: string (lengths 51 to 438k)
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2023-02-13T21:00:32Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: whathefish/my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whathefish/my_awesome_model This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5066 - Validation Loss: 0.7136 - Train Accuracy: 0.5967 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 435, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6782 | 0.6667 | 0.5867 | 0 | | 0.6154 | 0.6703 | 0.5883 | 1 | | 0.5066 | 0.7136 | 0.5967 | 2 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.9.0 - Datasets 2.9.0 - Tokenizers 0.13.2
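A minimal inference sketch for the card above. The card only reports accuracy and does not state the task, so a sequence-classification head on `whathefish/my_awesome_model` is assumed here; the example sentence is illustrative only.

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumption: the checkpoint carries a sequence-classification head
# (the card does not state the downstream task).
repo = "whathefish/my_awesome_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Das Essen war ausgezeichnet.", return_tensors="tf")
outputs = model(inputs)
print(outputs.logits)
```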
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 384.50 +/- 199.43 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Luca77 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Luca77 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Luca77 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2023-02-13T21:25:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.59 +/- 16.04 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
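Since the usage section in the card above is left as a TODO, the following is a minimal sketch (not the author's code) of loading and evaluating such a PPO agent with `huggingface_sb3` and Stable-Baselines3. The repo id and zip filename are placeholders, not taken from the card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and filename -- substitute the actual Hub repository.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Quick sanity check of the reported mean reward.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```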
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-410M-deduped ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). 
<figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-410M-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-410M-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-410M-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-410M-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-410M-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data Pythia-410M-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). 
You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-02-13T22:42:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 237.86 +/- 30.05 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.78 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="GrimReaperSam/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
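A possible continuation of the snippet above: a greedy rollout using the loaded Q-table. The `"qtable"` key and the gymnasium-style reset/step API are assumptions, so check the keys of the pickled dict and your gym version first.

```python
import numpy as np

# Continuing from the snippet above. Assumptions: the pickled dict stores the
# Q-table under the "qtable" key, and the env follows the gymnasium-style
# 5-tuple step API -- verify both for your setup.
qtable = model["qtable"]

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```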
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
23
null
--- license: mit --- This repository contains the average embedding for the direction of the entity `cat` for the model `CompVis/stable-diffusion-v1-4`, stored as the file `cad_sd14.pt`. It can be used as a direction for [pix2pix-zero](https://github.com/pix2pixzero/pix2pix-zero); check out its [Hugging Face Space](#). It was formed by averaging the CLIP embeddings of the text-encoder of `CompVis/stable-diffusion-v1-4` for the following sentences: - A cat washes itself. - A cat watching birds at a window. - A cat licking its paw.
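A rough sketch of how such an averaged text-encoder embedding could be computed with the Stable Diffusion v1-4 text encoder. The pooling details, the full sentence list used for the released file, and the output filename below may differ from the actual recipe.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Load the text encoder and tokenizer that ship with Stable Diffusion v1-4.
model_id = "CompVis/stable-diffusion-v1-4"
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

sentences = [
    "A cat washes itself.",
    "A cat watching birds at a window.",
    "A cat licking its paw.",
]

with torch.no_grad():
    tokens = tokenizer(sentences, padding="max_length", truncation=True, return_tensors="pt")
    embeddings = text_encoder(tokens.input_ids).last_hidden_state  # (n_sentences, 77, 768)
    direction = embeddings.mean(dim=0)  # average over the sentences

torch.save(direction, "cat_direction.pt")  # output filename is a placeholder
```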
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.68 +/- 52.07 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.931799370965072 - name: Recall type: recall value: 0.9473241332884551 - name: F1 type: f1 value: 0.9394976216306434 - name: Accuracy type: accuracy value: 0.9857538117383882 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0629 - Precision: 0.9318 - Recall: 0.9473 - F1: 0.9395 - Accuracy: 0.9858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0872 | 1.0 | 1756 | 0.0716 | 0.9122 | 0.9285 | 0.9203 | 0.9814 | | 0.0335 | 2.0 | 3512 | 0.0631 | 0.9257 | 0.9456 | 0.9356 | 0.9854 | | 0.0164 | 3.0 | 5268 | 0.0629 | 0.9318 | 0.9473 | 0.9395 | 0.9858 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
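A minimal sketch of running the fine-tuned NER model above with the `transformers` pipeline; the repo id is a placeholder since the card does not give the full Hub path.

```python
from transformers import pipeline

# "<user>/bert-finetuned-ner" is a placeholder -- substitute the actual Hub repo id.
ner = pipeline("token-classification", model="<user>/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```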
AnonymousSub/unsup-consert-emanuals
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: openai/whisper-medium results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_myst type: rishabhjain16/infer_myst config: en split: test metrics: - type: wer value: 12.22 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pfs type: rishabhjain16/infer_pfs config: en split: test metrics: - type: wer value: 2.98 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_cmu_9h type: rishabhjain16/infer_cmu_9h config: en split: test metrics: - type: wer value: 16.05 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/libritts_dev_clean type: rishabhjain16/libritts_dev_clean config: en split: test metrics: - type: wer value: 5.4 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_italian type: rishabhjain16/infer_pf_italian config: en split: test metrics: - type: wer value: 14.08 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_german type: rishabhjain16/infer_pf_german config: en split: test metrics: - type: wer value: 51.53 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_swedish type: rishabhjain16/infer_pf_swedish config: en split: test metrics: - type: wer value: 16.52 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_so_chinese type: rishabhjain16/infer_so_chinese config: en split: test metrics: - type: wer value: 22.8 name: WER --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3896 - Wer: 200.1910 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2328 | 0.12 | 500 | 0.2655 | 301.5949 | | 0.1838 | 1.11 | 1000 | 0.2496 | 286.1977 | | 0.1757 | 2.1 | 1500 | 0.2563 | 118.9213 | | 0.0254 | 3.09 | 2000 | 0.2992 | 237.0841 | | 0.0282 | 4.07 | 2500 | 0.3342 | 125.1999 | | 0.0229 | 5.06 | 3000 | 0.3502 | 268.7414 | | 0.0027 | 6.05 | 3500 | 0.3918 | 107.5536 | | 0.003 | 7.03 | 4000 | 0.3896 | 200.1910 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
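A minimal sketch of transcribing audio with the fine-tuned Whisper checkpoint above via the `transformers` ASR pipeline; the repo id and audio path are placeholders.

```python
from transformers import pipeline

# Repo id and audio path are placeholders -- substitute the actual Hub repo and file.
asr = pipeline("automatic-speech-recognition", model="<user>/whisper-medium-finetuned", chunk_length_s=30)
print(asr("sample.wav")["text"])
```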
Anonymreign/savagebeta
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: lancechen/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Anthos23/my-awesome-model
[ "pytorch", "tf", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 language: - en tags: - stable diffusion - open-prompts ---
Anupam/QuestionClassifier
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-SA results: [] pipeline_tag: summarization --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2847 - Rouge1: 0.1422 - Rouge2: 0.0403 - Rougel: 0.1337 - Rougelsum: 0.1342 - Gen Len: 8.4248 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.7269 | 1.0 | 527 | 1.5826 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.5708 | 2.0 | 1054 | 1.4112 | 0.035 | 0.0105 | 0.0357 | 0.0349 | 1.7168 | | 1.4796 | 3.0 | 1581 | 1.3644 | 0.1012 | 0.0167 | 0.0948 | 0.0942 | 8.2212 | | 1.3451 | 4.0 | 2108 | 1.3399 | 0.126 | 0.0205 | 0.1183 | 0.1182 | 9.0088 | | 1.3491 | 5.0 | 2635 | 1.3247 | 0.1307 | 0.0266 | 0.1232 | 0.1236 | 8.0088 | | 1.3109 | 6.0 | 3162 | 1.3112 | 0.1428 | 0.0325 | 0.1332 | 0.1334 | 7.6549 | | 1.2462 | 7.0 | 3689 | 1.3046 | 0.1435 | 0.0319 | 0.1342 | 0.1349 | 7.885 | | 1.2353 | 8.0 | 4216 | 1.2937 | 0.1404 | 0.0313 | 0.1297 | 0.1303 | 9.1239 | | 1.2838 | 9.0 | 4743 | 1.2903 | 0.1434 | 0.0372 | 0.1338 | 0.1344 | 8.1062 | | 1.2317 | 10.0 | 5270 | 1.2870 | 0.1459 | 0.0421 | 0.1388 | 0.1389 | 8.4248 | | 1.2598 | 11.0 | 5797 | 1.2857 | 0.1421 | 0.0403 | 0.1346 | 0.1351 | 8.2389 | | 1.1579 | 12.0 | 6324 | 1.2847 | 0.1422 | 0.0403 | 0.1337 | 0.1342 | 8.4248 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.0 - Tokenizers 0.13.2
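A minimal sketch of summarizing text with the fine-tuned checkpoint above; the repo id is a placeholder, and T5-style checkpoints may expect a `summarize:` prefix depending on their config.

```python
from transformers import pipeline

# "<user>/t5-small-SA" is a placeholder -- substitute the actual Hub repo id.
summarizer = pipeline("summarization", model="<user>/t5-small-SA")
print(summarizer("Long review text to condense ...", max_length=32, min_length=5))
```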
ArBert/bert-base-uncased-finetuned-ner-agglo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-finetuned-14feb-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-14feb-1 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4516 - Rouge1: 20.33 - Rouge2: 6.2 - Rougel: 19.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000275 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 4.0401 | 1.0 | 388 | 2.5481 | 16.31 | 4.7 | 16.1 | | 2.9776 | 2.0 | 776 | 2.4442 | 17.25 | 4.93 | 16.93 | | 2.7362 | 3.0 | 1164 | 2.4181 | 19.73 | 5.74 | 19.21 | | 2.5767 | 4.0 | 1552 | 2.4071 | 19.37 | 5.62 | 18.89 | | 2.4466 | 5.0 | 1940 | 2.3560 | 18.98 | 5.94 | 18.55 | | 2.3402 | 6.0 | 2328 | 2.3923 | 20.45 | 5.5 | 20.03 | | 2.2385 | 7.0 | 2716 | 2.3639 | 20.03 | 5.96 | 19.76 | | 2.1663 | 8.0 | 3104 | 2.3431 | 19.17 | 5.34 | 18.84 | | 2.0849 | 9.0 | 3492 | 2.4008 | 19.97 | 5.58 | 19.67 | | 2.0203 | 10.0 | 3880 | 2.3948 | 19.67 | 5.75 | 19.26 | | 1.9653 | 11.0 | 4268 | 2.3915 | 20.06 | 6.07 | 19.61 | | 1.9067 | 12.0 | 4656 | 2.4025 | 20.83 | 6.46 | 20.41 | | 1.8592 | 13.0 | 5044 | 2.4194 | 19.97 | 6.4 | 19.69 | | 1.8158 | 14.0 | 5432 | 2.4156 | 19.87 | 6.16 | 19.38 | | 1.7679 | 15.0 | 5820 | 2.4053 | 19.9 | 5.99 | 19.52 | | 1.748 | 16.0 | 6208 | 2.4156 | 19.68 | 5.81 | 19.28 | | 1.7198 | 17.0 | 6596 | 2.4306 | 20.0 | 6.26 | 19.63 | | 1.6959 | 18.0 | 6984 | 2.4499 | 19.1 | 6.19 | 18.82 | | 1.6769 | 19.0 | 7372 | 2.4536 | 20.62 | 6.3 | 20.15 | | 1.6682 | 20.0 | 7760 | 2.4516 | 20.33 | 6.2 | 19.9 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
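A minimal sketch of generating a summary with the fine-tuned mT5 checkpoint above using `generate` directly; the repo id and generation settings are placeholders chosen for illustration.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# "<user>/mt5-small-finetuned-14feb-1" is a placeholder -- substitute the actual Hub repo id.
repo = "<user>/mt5-small-finetuned-14feb-1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Long article text to summarize ...", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```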
ArBert/roberta-base-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "roberta", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - en tags: - stable-diffusion - text-to-image - lora license: creativeml-openrail-m inference: false --- # Katanagatari Style LoRA ## Usage To use this LoRA, download the file and drop it into the "\stable-diffusion-webui\models\Lora" folder. To use it in a prompt, please refer to the extra networks panel in your Automatic1111 webui. As it's slightly overtrained, I highly recommend using it anywhere between 0.3 and 0.8 strength for the best results. If you want to use it at higher strength, you'll need to put a lot of weighting on any details in your prompt that weren't in the training data. This LoRA was inspired by the art style of the 2010 anime Katanagatari, and I'd highly recommend you check it out [here](https://myanimelist.net/anime/6594/Katanagatari). Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/6OJ3Psh.png width=50% height=100%/></td> </tr> <tr> <td><img src=https://i.imgur.com/HqGXdmr.png width=50% height=100%/></td> </tr> <tr> <td><img src=https://i.imgur.com/EsrPIch.png width=50% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
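For users outside the Automatic1111 webui workflow described above, here is a sketch of loading the LoRA with `diffusers`. The base model, LoRA file name, and prompt are assumptions, not taken from the card; the `scale` value follows the 0.3 to 0.8 recommendation.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumptions: a SD 1.x base model and a .safetensors LoRA file in the current
# directory -- both names below are placeholders, not taken from the card.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(".", weight_name="katanagatari_style.safetensors")

image = pipe(
    "a wandering swordsman in a bamboo forest",  # illustrative prompt
    cross_attention_kwargs={"scale": 0.6},       # LoRA strength, per the 0.3 to 0.8 advice
).images[0]
image.save("out.png")
```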
Araby/Arabic-TTS
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - amazon_polarity pipeline_tag: text-classification --- A fine-tuned DistilBERT model used for sentiment analysis, fine-tuned on an Amazon reviews dataset. Take a look at inference.py for an example of how inference works.
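A minimal sketch of running the sentiment model above via the `transformers` pipeline; the repo id is a placeholder since the card does not state the model's Hub path.

```python
from transformers import pipeline

# Repo id is a placeholder -- the card does not state the model's Hub path.
classifier = pipeline("text-classification", model="<user>/distilbert-amazon-polarity")
print(classifier("This product exceeded my expectations!"))
```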
Arnold/common_voiceha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - the_pile --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-6.9B ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). 
<figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-6.9B for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-6.9B as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-6.9B has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-6.9B will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-6.9B to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-6.9B may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-6.9B. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).<br> The Pile was **not** deduplicated before being used to train Pythia-6.9B. ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). 
You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
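### Checkpoint sweep sketch

The 154 checkpoints described under *Intended Use* can be compared directly by loading different revisions. Below is a minimal sweep, reusing the 70M deduped checkpoint from the Quickstart for speed; the prompt and step choices are only illustrative, and the same revisions exist for every model in the suite.

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped")
prompt = tokenizer("The Pile is", return_tensors="pt")

# Each revision is a branch holding one training checkpoint.
for step in ["step1000", "step72000", "step143000"]:
    model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped", revision=step)
    tokens = model.generate(**prompt, max_new_tokens=20)
    print(step, tokenizer.decode(tokens[0]))
```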
Arnold/wav2vec2-hausa-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: model1-thesis-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model1-thesis-5 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6273 - Precision: 0.4620 - Recall: 0.6348 - F1: 0.5348 - Accuracy: 0.8196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 29 | 0.5895 | 0.3871 | 0.6261 | 0.4784 | 0.8086 | | No log | 2.0 | 58 | 0.5814 | 0.4424 | 0.6348 | 0.5214 | 0.8118 | | No log | 3.0 | 87 | 0.5734 | 0.4360 | 0.6522 | 0.5226 | 0.8332 | | No log | 4.0 | 116 | 0.6326 | 0.4808 | 0.6522 | 0.5535 | 0.8170 | | No log | 5.0 | 145 | 0.6273 | 0.4620 | 0.6348 | 0.5348 | 0.8196 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
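The hyperparameters listed above can be expressed as a `TrainingArguments` configuration. This is only an illustrative reconstruction: the dataset, task head, and output directory are not documented in this card, so the values below that do not appear in the list are placeholders.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="model1-thesis-5",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
)
```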
Arnold/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: dfm794/poca-SoccerTwos-2x-2-r-l 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Aron/distilbert-base-uncased-finetuned-emotion
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: nhiro3303/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Arpita/opus-mt-en-ro-finetuned-synthon-to-reactant
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
The RSE-BERT-base-10-rel model is trained with the following 10 relations:

1. entailment
2. contradiction
3. neutral
4. duplicate_question
5. non_duplicate_question
6. paraphrase
7. same_caption
8. qa_entailment
9. qa_not_entailment
10. same_sent

The BERT-base-uncased model is used as initialization. The model can be used to infer all ten relations.
ArseniyBolotin/bert-multi-PAD-ner
[ "pytorch", "jax", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v2
      type: PandaReachDense-v2
    metrics:
    - type: mean_reward
      value: -3.43 +/- 0.46
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the repo id and filename below are illustrative placeholders, not taken from this card:

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename; substitute the actual Hub repository for this agent.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Ashim/dga-transformer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-medium-toi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-toi This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0796 - Wer: 35.2601 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 2.6522 | 0.24 | 500 | 2.0369 | 75.7050 | | 0.9481 | 0.48 | 1000 | 1.3940 | 48.5549 | | 0.6936 | 0.72 | 1500 | 1.2731 | 44.5262 | | 0.6486 | 0.96 | 2000 | 1.1436 | 40.5500 | | 0.6288 | 1.2 | 2500 | 1.1495 | 38.6057 | | 0.5257 | 1.44 | 3000 | 1.1033 | 37.1519 | | 0.4218 | 1.68 | 3500 | 1.0615 | 36.3461 | | 0.4935 | 1.92 | 4000 | 1.0796 | 35.2601 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
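A minimal transcription sketch, assuming the fine-tuned checkpoint is published to the Hub; the repo id and audio path below are placeholders, since the card does not state where this fine-tune is hosted.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual Hub path of this fine-tune.
asr = pipeline("automatic-speech-recognition", model="<user>/whisper-medium-toi")
print(asr("example.wav")["text"])
```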
Ateeb/asd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: EdenYav/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Augustvember/wokka4
[ "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_keras_callback model-index: - name: pretrained-m-bert-90 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # pretrained-m-bert-90 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.7094 - Validation Loss: 14.5332 - Epoch: 89 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 10.2413 | 10.9668 | 0 | | 7.5814 | 10.9638 | 1 | | 7.0095 | 11.3733 | 2 | | 6.4352 | 11.5989 | 3 | | 6.7137 | 11.4072 | 4 | | 6.4383 | 11.8287 | 5 | | 6.2223 | 12.0344 | 6 | | 6.1759 | 11.6900 | 7 | | 6.0764 | 11.7144 | 8 | | 5.8802 | 12.1089 | 9 | | 6.0159 | 12.3456 | 10 | | 5.9254 | 12.7065 | 11 | | 5.6652 | nan | 12 | | 5.8185 | 12.8155 | 13 | | 5.9185 | 12.7047 | 14 | | 5.8418 | 12.7175 | 15 | | 5.9122 | 12.5688 | 16 | | 5.9698 | 12.5251 | 17 | | 5.8286 | 12.7015 | 18 | | 5.8807 | 13.2514 | 19 | | 5.8330 | 12.8541 | 20 | | 5.6456 | 13.4088 | 21 | | 5.7257 | 13.5517 | 22 | | 5.8854 | 12.8775 | 23 | | 5.6770 | 13.6499 | 24 | | 5.6026 | 13.9732 | 25 | | 5.6651 | 13.0827 | 26 | | 5.8888 | 13.1292 | 27 | | 5.8123 | 12.8970 | 28 | | 5.7525 | 13.3724 | 29 | | 5.9020 | 13.5507 | 30 | | 5.8642 | 13.3284 | 31 | | 5.9329 | 13.7350 | 32 | | 5.7728 | 13.3011 | 33 | | 5.8297 | 13.6108 | 34 | | 5.8118 | 13.3331 | 35 | | 5.7382 | 13.7047 | 36 | | 5.8061 | 13.8107 | 37 | | 5.8423 | 13.4207 | 38 | | 5.8442 | 13.6832 | 39 | | 5.7680 | 14.1248 | 40 | | 5.7668 | 13.6626 | 41 | | 5.7826 | 13.6470 | 42 | | 5.7692 | 13.9430 | 43 | | 5.5109 | 14.0924 | 44 | | 5.7394 | 14.0253 | 45 | | 5.8013 | 13.5926 | 46 | | 5.7222 | 13.9732 | 47 | | 5.7023 | 14.0204 | 48 | | 5.8250 | 13.9655 | 49 | | 5.6064 | 14.0406 | 50 | | 5.7319 | 14.1826 | 51 | | 5.6849 | 13.9114 | 52 | | 5.8167 | 13.9917 | 53 | | 5.7573 | 14.1509 | 54 | | 5.6921 | 14.3722 | 55 | | 5.7190 | 14.4919 | 56 | | 5.8501 | 13.6970 | 57 | | 5.7627 | 14.1393 | 58 | | 5.8031 | 14.1246 | 59 | | 5.7207 | 14.3084 | 60 | | 5.7979 | 13.9398 | 61 | | 5.7068 | 14.2865 | 62 | | 5.7547 | 14.2590 | 63 | | 5.8349 | 14.1481 | 64 | | 5.7924 | 14.0461 | 65 | | 5.8127 | 14.1274 | 66 | | 5.7590 | 14.3578 | 67 | | 5.8297 | 14.2429 | 68 | | 5.7822 | 14.2742 | 69 | | 5.7708 | 14.3720 | 70 | | 5.6521 | 14.8640 | 71 | | 5.7253 | 14.4404 | 72 | | 5.8076 | 14.1843 | 73 | | 5.7746 | 14.4657 | 74 | | 5.8592 | 14.2965 | 75 | | 5.6643 | 14.0996 | 76 | | 5.7849 | 14.3531 | 77 | | 5.7418 | 14.4266 | 78 | | 5.7030 | 14.5584 | 79 | | 5.8298 | 14.1390 | 80 | | 5.9061 | 13.9172 | 81 | | 5.6570 | 14.6991 | 82 | | 5.7040 | 14.7839 | 83 | | 5.8064 | 14.2581 | 84 | | 5.6855 | 14.4449 | 85 | | 5.7803 | 14.7469 | 86 | | 5.7495 | 14.4704 | 87 | | 5.7539 | 14.5520 | 88 | | 5.7094 | 
14.5332 | 89 | ### Framework versions - Transformers 4.27.0.dev0 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
Augustvember/wokka5
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: EdenYav/ppo-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Augustvember/your-model-name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
    metrics:
    - type: mean_reward
      value: 1499.18 +/- 73.69
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the repo id and filename below are illustrative placeholders, not taken from this card:

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename; substitute the actual Hub repository for this agent.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
Ayham/bert_bert_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: fermaat/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Ayham/bert_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - spacy - token-classification language: - en model-index: - name: en_ner_sender_recipient results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.7942124225 - name: NER Recall type: recall value: 0.7429116388 - name: NER F Score type: f_score value: 0.767705961 --- | Feature | Description | | --- | --- | | **Name** | `en_ner_sender_recipient` | | **Version** | `1.0.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (2 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `RECIPIENT`, `SENDER` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 76.77 | | `ENTS_P` | 79.42 | | `ENTS_R` | 74.29 | | `TOK2VEC_LOSS` | 172291.20 | | `NER_LOSS` | 173143.05 |
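A minimal usage sketch, assuming the packaged pipeline has been installed so that `spacy.load` can resolve it by name; the example sentence is illustrative.

```python
import spacy

nlp = spacy.load("en_ner_sender_recipient")
doc = nlp("Hi team, please make sure Alice gets the attached report. Thanks, Bob")
for ent in doc.ents:
    # Entities are labelled either SENDER or RECIPIENT.
    print(ent.text, ent.label_)
```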
Ayham/bert_roberta_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
---
license: gpl-3.0
---

## Shouhou-Lora

Shouhou (祥凤) LoRA model.

Recommended weight: around 0.4.
Ayham/distilbert_bert_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
---
language:
- en
library_name: transformers
tags:
- Question Generation
- Poll Generation
---
Ayham/distilbert_distilgpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2023-02-14T09:46:38Z
--- language: - hi license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small Hi - Saiful results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hi - Saiful This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2908 - eval_wer: 38.0005 - eval_runtime: 1507.0416 - eval_samples_per_second: 1.92 - eval_steps_per_second: 0.24 - epoch: 2.44 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
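For a multilingual Whisper fine-tune like this one, decoding is usually pinned to Hindi transcription at inference time. A minimal sketch, assuming the checkpoint is published to the Hub; the repo id below is a placeholder.

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Placeholder repo id; replace with the actual Hub path of this fine-tune.
repo_id = "<user>/whisper-small-hi"
processor = WhisperProcessor.from_pretrained(repo_id)
model = WhisperForConditionalGeneration.from_pretrained(repo_id)

# Pin decoding to Hindi transcription so language auto-detection is bypassed.
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="hindi", task="transcribe")
```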
Ayham/xlnet_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-02-14T10:44:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: token_fine_tunned_flipkart_2_gl9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # token_fine_tunned_flipkart_2_gl9 This model is a fine-tuned version of [vinayak361/token_fine_tunned_flipkart_2_gl7](https://huggingface.co/vinayak361/token_fine_tunned_flipkart_2_gl7) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2452 - Precision: 0.8593 - Recall: 0.8767 - F1: 0.8679 - Accuracy: 0.9105 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 451 | 0.3331 | 0.8070 | 0.8310 | 0.8188 | 0.8774 | | 0.4065 | 2.0 | 902 | 0.2927 | 0.8319 | 0.8526 | 0.8421 | 0.8940 | | 0.3251 | 3.0 | 1353 | 0.2737 | 0.8428 | 0.8633 | 0.8529 | 0.9021 | | 0.2825 | 4.0 | 1804 | 0.2650 | 0.8484 | 0.8651 | 0.8567 | 0.9046 | | 0.2568 | 5.0 | 2255 | 0.2586 | 0.8543 | 0.8749 | 0.8645 | 0.9085 | | 0.2419 | 6.0 | 2706 | 0.2511 | 0.8552 | 0.8754 | 0.8652 | 0.9083 | | 0.2351 | 7.0 | 3157 | 0.2481 | 0.8564 | 0.8746 | 0.8654 | 0.9102 | | 0.2226 | 8.0 | 3608 | 0.2455 | 0.8551 | 0.8746 | 0.8647 | 0.9089 | | 0.222 | 9.0 | 4059 | 0.2458 | 0.8597 | 0.8769 | 0.8682 | 0.9106 | | 0.2207 | 10.0 | 4510 | 0.2452 | 0.8593 | 0.8767 | 0.8679 | 0.9105 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu102 - Datasets 2.2.2 - Tokenizers 0.12.1
Ayham/xlnet_roberta_new_summarization_cnn_dailymail
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-14T10:52:13Z
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
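The card does not spell out the exact distillation objective. A common formulation for this kind of second-step, task-specific distillation blends the hard-label loss with a KL term against the teacher's temperature-softened logits; the temperature and weighting below are illustrative, and for SQuAD the same loss would be applied to the start and end logits separately.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Hard-label cross-entropy against the gold answer positions.
    hard_loss = F.cross_entropy(student_logits, labels)
    # KL divergence between temperature-softened student and teacher distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```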
Ayham/xlnet_roberta_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-02-14T10:53:08Z
--- license: afl-3.0 language: - en library_name: keras ---
Aymene/opus-mt-en-ro-finetuned-en-to-ro
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-14T11:03:37Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: lmv2-g-rai-auth-02-14 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lmv2-g-rai-auth-02-14 This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0368 - Dob Key Precision: 0.5057 - Dob Key Recall: 0.5205 - Dob Key F1: 0.5130 - Dob Key Number: 171 - Dob Value Precision: 0.8071 - Dob Value Recall: 0.9191 - Dob Value F1: 0.8595 - Dob Value Number: 173 - Patient Name Key Precision: 0.6923 - Patient Name Key Recall: 0.7219 - Patient Name Key F1: 0.7068 - Patient Name Key Number: 187 - Patient Name Value Precision: 0.9235 - Patient Name Value Recall: 0.9628 - Patient Name Value F1: 0.9427 - Patient Name Value Number: 188 - Provider Name Key Precision: 0.6930 - Provider Name Key Recall: 0.7065 - Provider Name Key F1: 0.6997 - Provider Name Key Number: 460 - Provider Name Value Precision: 0.9353 - Provider Name Value Recall: 0.9476 - Provider Name Value F1: 0.9414 - Provider Name Value Number: 458 - Overall Precision: 0.7796 - Overall Recall: 0.8082 - Overall F1: 0.7936 - Overall Accuracy: 0.9944 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Dob Key Precision | Dob Key Recall | Dob Key F1 | Dob Key Number | Dob Value Precision | Dob Value Recall | Dob Value F1 | Dob Value Number | Patient Name Key Precision | Patient Name Key Recall | Patient Name Key F1 | Patient Name Key Number | Patient Name Value Precision | Patient Name Value Recall | Patient Name Value F1 | Patient Name Value Number | Provider Name Key Precision | Provider Name Key Recall | Provider Name Key F1 | Provider Name Key Number | Provider Name Value Precision | Provider Name Value Recall | Provider Name Value F1 | Provider Name Value Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-------------------:|:----------------:|:------------:|:----------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:-------------------------:|:---------------------------:|:------------------------:|:--------------------:|:------------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 1.1221 | 1.0 | 241 | 0.4373 | 0.0 | 0.0 | 0.0 | 171 | 0.0 | 0.0 | 0.0 | 173 | 0.0 | 0.0 | 0.0 | 187 | 0.0 | 0.0 | 0.0 | 188 | 0.0 | 0.0 | 0.0 | 460 | 0.0 | 0.0 | 0.0 | 458 | 0.0 | 0.0 | 0.0 | 0.9696 | | 0.258 | 2.0 | 482 | 0.1408 | 0.0385 | 0.0351 | 0.0367 | 171 | 0.9778 | 0.2543 | 
0.4037 | 173 | 0.0385 | 0.0053 | 0.0094 | 187 | 0.1739 | 0.0426 | 0.0684 | 188 | 0.0286 | 0.0043 | 0.0075 | 460 | 0.6628 | 0.7424 | 0.7003 | 458 | 0.4685 | 0.2450 | 0.3217 | 0.9782 | | 0.1066 | 3.0 | 723 | 0.0774 | 0.4011 | 0.4386 | 0.4190 | 171 | 0.8404 | 0.9133 | 0.8753 | 173 | 0.5097 | 0.5615 | 0.5344 | 187 | 0.4804 | 0.7181 | 0.5757 | 188 | 0.5108 | 0.5674 | 0.5376 | 460 | 0.8841 | 0.9323 | 0.9075 | 458 | 0.6255 | 0.7092 | 0.6648 | 0.9920 | | 0.0685 | 4.0 | 964 | 0.0585 | 0.4229 | 0.4327 | 0.4277 | 171 | 0.8495 | 0.9133 | 0.8802 | 173 | 0.5479 | 0.5508 | 0.5493 | 187 | 0.9005 | 0.9628 | 0.9306 | 188 | 0.6362 | 0.6957 | 0.6646 | 460 | 0.9315 | 0.9498 | 0.9405 | 458 | 0.7390 | 0.7764 | 0.7572 | 0.9938 | | 0.0532 | 5.0 | 1205 | 0.0486 | 0.4432 | 0.4561 | 0.4496 | 171 | 0.8634 | 0.9133 | 0.8876 | 173 | 0.6862 | 0.6898 | 0.688 | 187 | 0.905 | 0.9628 | 0.9330 | 188 | 0.7106 | 0.7152 | 0.7129 | 460 | 0.9375 | 0.9498 | 0.9436 | 458 | 0.7826 | 0.8002 | 0.7913 | 0.9943 | | 0.0453 | 6.0 | 1446 | 0.0429 | 0.4277 | 0.4327 | 0.4302 | 171 | 0.8971 | 0.9075 | 0.9023 | 173 | 0.6806 | 0.6952 | 0.6878 | 187 | 0.8835 | 0.9681 | 0.9239 | 188 | 0.7181 | 0.7087 | 0.7133 | 460 | 0.9332 | 0.9454 | 0.9393 | 458 | 0.7829 | 0.7954 | 0.7891 | 0.9943 | | 0.0392 | 7.0 | 1687 | 0.0392 | 0.4432 | 0.4561 | 0.4496 | 171 | 0.8177 | 0.9075 | 0.8603 | 173 | 0.6875 | 0.7059 | 0.6966 | 187 | 0.9333 | 0.9681 | 0.9504 | 188 | 0.7045 | 0.7152 | 0.7098 | 460 | 0.9353 | 0.9476 | 0.9414 | 458 | 0.7782 | 0.8015 | 0.7896 | 0.9944 | | 0.0351 | 8.0 | 1928 | 0.0368 | 0.5057 | 0.5205 | 0.5130 | 171 | 0.8071 | 0.9191 | 0.8595 | 173 | 0.6923 | 0.7219 | 0.7068 | 187 | 0.9235 | 0.9628 | 0.9427 | 188 | 0.6930 | 0.7065 | 0.6997 | 460 | 0.9353 | 0.9476 | 0.9414 | 458 | 0.7796 | 0.8082 | 0.7936 | 0.9944 | | 0.0326 | 9.0 | 2169 | 0.0354 | 0.4375 | 0.4503 | 0.4438 | 171 | 0.8438 | 0.9364 | 0.8877 | 173 | 0.6943 | 0.7166 | 0.7053 | 187 | 0.9235 | 0.9628 | 0.9427 | 188 | 0.7063 | 0.7109 | 0.7086 | 460 | 0.9353 | 0.9476 | 0.9414 | 458 | 0.7809 | 0.8033 | 0.7919 | 0.9944 | | 0.0313 | 10.0 | 2410 | 0.0350 | 0.4886 | 0.5029 | 0.4957 | 171 | 0.8777 | 0.9538 | 0.9141 | 173 | 0.6959 | 0.7219 | 0.7087 | 187 | 0.9188 | 0.9628 | 0.9403 | 188 | 0.6674 | 0.7022 | 0.6843 | 460 | 0.9333 | 0.9476 | 0.9404 | 458 | 0.7770 | 0.8088 | 0.7926 | 0.9944 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.2.2 - Tokenizers 0.13.2
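A minimal inference sketch for a LayoutLMv2 token-classification fine-tune like this one. The repo id and image path are placeholders, and the processor's built-in OCR requires `pytesseract` (plus `detectron2` for the model's visual backbone).

```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# Placeholder repo id and image path; the processor runs OCR on the page image by default.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("<user>/lmv2-g-rai-auth-02-14")

image = Image.open("page.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1)
```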
Ayoola/cdial-yoruba-test
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "has_space" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
2023-02-14T11:03:57Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 12.60 +/- 8.97 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
Ayumi/Jovana
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-14T11:32:49Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AyushPJ/test-squad-trained-finetuned-squad
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-02-14T11:57:24Z
--- license: mit tags: - generated_from_trainer datasets: - imdb model-index: - name: gpt2-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-imdb This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
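A minimal generation sketch, assuming the fine-tuned checkpoint is pushed to the Hub; the repo id and prompt below are placeholders.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual Hub path of this fine-tune.
generator = pipeline("text-generation", model="<user>/gpt2-imdb")
print(generator("This movie was", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```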
Azaghast/DistilBERT-SCP-Class-Classification
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
2023-04-11T13:01:43Z
--- license: afl-3.0 language: - zh - en - ja --- <style> h2 { margin: 0; } table { border: 1px solid black; table-layout: fixed; } tr:not(:last-child) { border-bottom: 1px solid black; } th, td { vertical-align: top; padding: 10px !important; } td:not(:last-child) { border-right: 1px solid black; } </style> <table> <tr> <th colspan="3" align="center"> <h2>Fashion Magazine/时尚摄影 v1.0</h2> </th> <th align="center"> Base Model: SD 1.5 </th> </tr> <tr> <td> <b>Page</b> </td> <td colspan="3"> <a href="https://civitai.com/models/43093/or-fashion-magazine-style" target="_blank"> https://civitai.com/models/43093/or-fashion-magazine-style </a> </td> </tr> <tr> <td> <b>Direct Link</b> </td> <td colspan="3"> <a href="https://huggingface.co/emmajoanne/loras/resolve/main/FashionMagazineStyle_v1.safetensors" target="_blank"> https://huggingface.co/emmajoanne/loras/resolve/main/FashionMagazineStyle_v1.safetensors </a> </td> </tr> <tr> <td colspan=4> 这是一个普通的LoRA,不需要LoCON插件即可使用。<br> 会在画面中添加类似杂志封面的文字,同时能够对配饰相关的提示词做出响应,画出复杂、夸张的发饰 / 项链 / 耳环等元素。<br> 在prompt中添加下列提示词(🔸必须 | 🔹可选):<br> The prompts(🔸mandatory kinda | 🔹optional):<br> 🔸LoRA引用 / <lora:xxxxx><br> 🔹FashionMagCover<br> 🔹magazine cover<br> 🔹english text, watermark, artist name, signature </td> </tr> <tr> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/FashionMagazineStyle_v1_sample1.jpeg" width="150"> </td> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/FashionMagazineStyle_v1_sample2.jpeg" width="150"> </td> </td> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/FashionMagazineStyle_v1_sample3.jpeg" width="150"> </td> </td> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/FashionMagazineStyle_v1_sample4.jpeg" width="150"> </td> </tr> <tr> <td> <b>Prompt</b>:<br>RAW, (masterpiece, best quality, photorealistic, absurdres, 8k:1.2), nsfw, best lighting, complex pupils, detailed background, night, cooling tower, 1girl, solo, upper body, (close up, face focus:1.2), looking back, cowboy shot, standing, furled brow, shiny skin, eyeliner, gothic eyeshadow, eye glitter, blush, flower, long hair, curly hair, dark blonde hair, highly detailed skin, fishnet bodysuit, hair bow, chocker, see-through, (magazine cover, FashionMagCover, english text, username, watermark, user name, artist name, signature:1.1), &lt;lora:LoRA-FashionMagCover-000008:0.9&gt; </td> <td> <b>Prompt</b>:<br>RAW, (masterpiece, best quality, photorealistic, absurdres, 8k:1.2), best lighting, complex pupils, detailed background, dusk, greenhouse, 1girl, solo, upper body, (close up, face focus:1.2), looking up, cowboy shot, standing, tired, shiny skin, eyeliner, gothic eyeshadow, blush, flower, long hair, curly hair, ash brown hair, highly detailed skin, bareback jumpsuit, jaw clip, chocker, see-through, (magazine cover, japanese text, username, watermark, user name, artist name, signature:1.1), &lt;lora:LoRA-FashionMagCover:1&gt; </td> <td> <b>Prompt</b>:<br>official art, unity 8k wallpaper, ultra detailed, beautiful and aesthetic, masterpiece, best quality, (close up, face focus:1.2), (zentangle, mandala, tangle, entangle), (fractal art:1.3) , 1girl, extremely detailed, dynamic angle, the most beautiful form of chaos, elegant, a brutalist designed, vivid colours, romanticism, by james jean, roby dwi antono, ross tran, francis bacon, michal mraz, adrian ghenie, petra cortright, gerhard richter, takato yamamoto, ashley wood, atmospheric, 
ecstasy of musical notes, streaming musical notes visible, &lt;lora:Lora_v20:0.8&gt; &lt;lora:FashionMagazineStyle_v10:0.6&gt; </td> <td> <b>Prompt</b>:<br>1girl,(8K, RAW photo, best quality, masterpiece:1.2), (realistic, photo-realistic:1.37), ultra-detailed, ultra high res, ,cowboy shot,front,back light,cowboy shot,looking at viewer, &lt;lora:FashionMagazineStyle_v10:1>SEE DESCRIPTION,extremely detailed magazine cover-style digital painting, (magazine cover-style illustration of a fashionable woman in a vibrant outfit), abs, adding a touch of fantasy to the scene, (the text on the cover should be bold and attention-grabbing, with the title of the magazine and a catchy headline, the overall style should be modern and trendy, with a focus on fashion and fantasy, helvetica-bold), art by John Collier and Albert Aublet and Krenz Cushart and Artem Demura, (photographed on a Canon 5D Mark II with Canon MPE65 lens, 1/125th, f/13, ISO 100), &lt;lora:epiNoiseoffset_v2:1.5>rim lighting, two tone lighting, dimly lit, low key, portrait of a award winning photo of posing in a dark studio, &lt;lora:japanesedolllikenessV1_v15:0.5&gt; </td> </tr> <tr> <td> <b>Negative</b>:<br>badhandv4, deformityv6, bad-picture-chill-75v, easynegative, verybadimagenegative_v1.2-6400, (skin spot, mole, mole under eye, freckles, facial mark:1.4), (low quality, worst quality, blurry:1.2), bad anatomy, bad proportions, missing fingers, extra digit, fewer digits, bad-artist, ribs, aged up, wrinkled skin, wire, character sheet, multiple views, jpeg artifacts, pubic hair, cropped, monochrome, dof, backlighting, sketch, painting, censored, low key, high contrast, low key, high contrast, </td> <td> <b>Negative</b>:<br>badhandv4, deformityv6, bad-picture-chill-75v, easynegative, verybadimagenegative_v1.2-6400, (skin spot, mole, mole under eye, freckles, facial mark:1.4), (low quality, worst quality, blurry:1.2), bad anatomy, bad proportions, missing fingers, extra digit, fewer digits, bad-artist, ribs, aged up, wrinkled skin, wire, character sheet, multiple views, jpeg artifacts, pubic hair, cropped, monochrome, dof, backlighting, sketch, painting, censored, low key, high contrast, low key, high contrast, Steps: 36, Sampler: DPM++ 2M Karras v2, CFG scale: 11, Seed: 270379761, Size: 512x768, Model hash: 328a74abc3, Model: 3D_casheartmix_unrealdark, Clip skip: 2, Wildcard prompt: "RAW, (masterpiece, best quality, photorealistic, absurdres, 8k:1.2), best lighting, complex pupils, detailed background, __composition-time__, __composition-location__, 1girl, solo, upper body, (close up, face focus:1.2), __pose-head__, cowboy shot, standing, __face-expression__, shiny skin, eyeliner, gothic eyeshadow, blush, flower, __hairlength__, curly hair, __hair-color__ hair, highly detailed skin, __cloth-sexy__, __cloth-hairaccessory__, chocker, see-through, (magazine cover, japanese text, username, watermark, user name, artist name, signature:1.1), </td> <td> <b>Negative</b>:<br>ng_deepnegative_v1_75t,( english text, watermark, artist name, signature:1.5), (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, normal quality, ((monochrome)), ((grayscale)), badhandv4 </td> <td> <b>Negative</b>:<br>paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, (outdoor:1.6), manboobs, backlight,(ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn 
hands:1.331), blurry, (bad anatomy:1.21), (bad proportions:1.331), extra limbs, (disfigured:1.331), (more than 2 nipples:1.331), (missing arms:1.331), (extra legs:1.331), (fused fingers:1.61051), (too many fingers:1.61051), (unclear eyes:1.331), lowers, bad hands, missing fingers, extra digit, (futa:1.1),bad hands, missing fingers sketch, duplicate, ugly, huge eyes, text, logo, monochrome, worst face, (bad and mutated hands:1.3), (worst quality:2.0), (low quality:2.0), (blurry:2.0), horror, geometry, bad_prompt_v2, (bad hands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), crown braid, (deformed fingers:1.2), (long fingers:1.2),(bad-artist-anime), bad-artist, bad hand, EasyNegative, Unspeakable-Horrors-Composition-4v, verybadimagenegative_v1.3, </td> </tr> </table> <table> <tr> <th colspan="3" align="center"> <h2>hanfu/汉服 v3.0</h2> </th> <th align="center"> Base Model: SD 1.5 </th> </tr> <tr> <td> <b>Page</b> </td> <td colspan="3"> <a href="https://civitai.com/models/15365/hanfu" target="_blank"> https://civitai.com/models/15365/hanfu </a> </td> </tr> <tr> <td> <b>Direct Link</b> </td> <td colspan="3"> <a href="https://huggingface.co/emmajoanne/loras/resolve/main/hanfu_v3.safetensors" target="_blank"> https://huggingface.co/emmajoanne/loras/resolve/main/hanfu_v3.safetensors </a> </td> </tr> <tr> <td colspan=4> <a href="https://sleepy-oyster-204.notion.site/hanfu-e7a2aab3ea58451bbe2400bb08955bb6" target="_blank">hanfu master document (hanfu ALL Document)</a> A high-quality hanfu LoRA model that lets you experience the beauty of hanfu. Don't hesitate, download it and give it a try!<br> <a href="https://civitai.com/models/44395" target="_blank">Tang Style (Tang-style hanfu)</a><br> <a href="https://civitai.com/models/47916" target="_blank">Song Style (Song-style hanfu)</a><br> <a href="https://civitai.com/models/15365?modelVersionId=30796" target="_blank">Ming Style (Ming style being split out)</a><br> Versions v3.0 - v1.0 support multiple styles: Han, Tang, Song, Ming, and Jin<br> v3.0 tags / trigger words<br> <ul> <li>Ming-style hanfu: hanfu, ming style</li> <li>Song-style hanfu: hanfu, song style</li> <li>Tang-style hanfu: hanfu, tang style</li> <li>Jin-style hanfu: hanfu, jin style</li> <li>Han-style hanfu: hanfu, han style</li> </ul> </td> </tr> <tr> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/hanfu_v3_sample1.jpeg" width="150"> </td> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/hanfu_v3_sample2.jpeg" width="150"> </td> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/hanfu_v3_sample3.png" width="150"> </td> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/hanfu_v3_sample4.jpeg" width="150"> </td> </tr> <tr> <td> <b>Prompt</b>:<br>(8k, best quality, masterpiece:1.2), (realistic, photo-realistic:1.2)1girl,perfect face, perfect eyes,pureerosface_v1, red hanfu,tang style,(full body:1.2),&lt;lora:hanfu_v30:0.6&gt; </td> <td> <b>Prompt</b>:<br>(8k, best quality, masterpiece:1.2), (realistic, photo-realistic:1.2)1girl,perfect face, perfect eyes,perfect hands,pureerosface_v1, hanfu, ming style,&lt;lora:hanfu_v30:0.6&gt; </td> <td> <b>Prompt</b>:<br>(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2), (1girl:1.3),dynamic pose,extreme detailed(fractal art:1.2),colorful,highest detailed,(zentangle:1.2), (dynamic pose), (abstract background:1.5), (treditional dress:1.2), (shiny skin),floating hair,(many colors:1.4), upper body, hanfu,jin style, (((multicolored background))),(many colors:1.4), &lt;lora:hanfu_v30:0.6&gt; </td> <td>
<b>Prompt</b>:<br>(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2), extremely detailed,(fractal art:1.2),colorful,highest detailed,(zentangle:1.2), (dynamic pose), (abstract background:1.5), (many colors:1.4), 1 girl, (MF-SD15-V1:0.8), black hair, hanfu, ming style, &lt;lora:hanfu_v30:0.5&gt;, cherry blossom season, </td> </tr> <tr> <td> <b>Negative</b>:<br>(EasyNegative:1.2),(Bad_Prompt_v2:0.8),(Bad_Hands_5),sketch by Bad_Artist, (worst quality, low quality:1.4), (bad anatomy), watermark, signature, text, logo,contact, (extra limbs),Six fingers,Low quality fingers,monochrome,(((missing arms))),(((missing legs))), (((extra arms))),(((extra legs))),less fingers,lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, (depth of field, bokeh, blurry:1.4),blurry background,bandages, </td> <td> <b>Negative</b>:<br>(EasyNegative:1.2),(Bad_Prompt_v2:0.8),(Bad_Hands_5:1.4),sketch by Bad_Artist, (worst quality, low quality:1.2), (bad anatomy), watermark, signature, text, logo,contact, (extra limbs),Six fingers,Low quality fingers,monochrome,(((missing arms))),(((missing legs))), (((extra arms))),(((extra legs))),less fingers,lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, (depth of field, bokeh, blurry:1.4),blurry background,bandages, </td> <td> <b>Negative</b>:<br>(worst quality, low quality:2),easynegative,badhandv4 </td> <td> <b>Negative</b>:<br>easynegative, (nipples:1.2), watermark, text, black and white photos, (worst quality:1.5), (low quality:1.5), (normal quality:1.5), low res, bad anatomy, bad hands, normal quality, ((monochrome)), ((grayscale)), </td> </tr> </table> <table> <tr> <th colspan="3" align="center"> <h2>NijiExpress v2.0</h2> </th> <th align="center"> Base Model: SD 1.5 </th> </tr> <tr> <td> <b>Page</b> </td> <td colspan="3"> <a href="https://civitai.com/models/47909/nijiexpressv2" target="_blank"> https://civitai.com/models/47909/nijiexpressv2 </a> </td> </tr> <tr> <td> <b>Direct Link</b> </td> <td colspan="3"> <a href="https://huggingface.co/emmajoanne/loras/resolve/main/NijiExpress_v2.safetensors" target="_blank"> https://huggingface.co/emmajoanne/loras/resolve/main/NijiExpress_v2.safetensors </a> </td> </tr> <tr> <td colspan=4> This update improves generality, so the LoRA works with more models and sampling methods<br> 500+ high-quality Nijijourney images with a consistent style were selected for higher-precision training<br> Recommended keywords: letterboxed, illustration,<br> Hand and limb bugs still remain (and may even be worse)<br> Using negative embeddings such as "bad prompt, easynegative, badhand" is strongly recommended </td> </tr> <tr> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/NijiExpress_v2_sample1.jpeg" width="150"> </td> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/NijiExpress_v2_sample2.jpeg" width="150"> </td> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/NijiExpress_v2_sample3.jpeg" width="150"> </td> <td align="center"> <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/NijiExpress_v2_sample4.jpeg" width="150"> </td> </tr> <tr> <td> <b>Prompt</b>:<br>mushrooms forest, 1girl, chibi illustration.media, &lt;lora:NijiExpressV2:0.9&gt; , masterpiece, best quality, </td> <td> <b>Prompt</b>:<br>(((ganyu \(genshin impact\), ))),pale blue hair, ,dark background, ,ganyu,holding a blue glowing ball, </td> <td>
<b>Prompt</b>:<br>solo, closed eyes, long hair, profile,(liquid hair:1.5), from side, upper body, closed mouth, gradient, gradient background, multicolored hair, simple background, grey background, lips, dripping, eyelashes, medium breasts, liquid &lt;lora:nijiexpressV2_v20:0.8&gt; &lt;lora:samdoesartsSamYang_offset:0.9&gt; </td> <td> <b>Prompt</b>:<br>solo, closed eyes, long hair, profile,(liquid hair:1.5), from side, upper body, closed mouth, gradient, gradient background, multicolored hair, simple background, grey background, lips, dripping, eyelashes, medium breasts, liquid &lt;lora:nijiexpressV2_v20:0.8&gt; &lt;lora:samdoesartsSamYang_offset:0.9&gt; </td> </tr> <tr> <td> <b>Negative</b>:<br>badhandv4 verybadimagenegative_v1.3 easynegative, </td> <td> <b>Negative</b>:<br>,badhandv4 easynegative verybadimagenegative_v1.3 </td> <td> <b>Negative</b>:<br> </td> <td> <b>Negative</b>:<br> </td> </tr> </table> **IU** v3.5 Civitai link: _https://civitai.com/models/11722/iu_ Download link: _https://huggingface.co/emmajoanne/loras/resolve/main/iu_v35.safetensors_ <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/iu_v35_sample1.jpeg" width="250"> prompt _nikon RAW photo,8 k,Fujifilm XT3,masterpiece, best quality, 1girl,solo,realistic, photorealistic,ultra detailed, diamond stud earrings, long straight black hair, hazel eyes, serious expression, slender figure, wearing a black blazer and white blouse, standing against a city skyline at night iu1, \<lora:iu_v35:1\>\<lora:lightAndShadow_v10:0.7\>_ negative _(worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans,extra fingers,fewer fingers,strange fingers,bad hand,_ **blindbox/大概是盲盒** Civital link: _https://civitai.com/models/25995/blindbox_ Download link: _https://huggingface.co/emmajoanne/loras/resolve/main/blindbox_v1_mix.safetensors_ <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/blindbox_v1_mix_sample1.jpeg" width="250"> prompt _(masterpiece),(best quality),(ultra-detailed), (full body:1.2), 1girl,chibi,cute, smile, white Bob haircut, red eyes, earring, white shirt,black skirt, lace legwear, (sitting on red sofa), seductive posture, smile, A sleek black coffee table sits in front of the sofa and a few decorative items are placed on the shelves, (beautiful detailed face), (beautiful detailed eyes), \<lora:blindbox_v1_mix:1\>,_ negative _(low quality:1.3), (worst quality:1.3)_ **Moxin/墨心** v1.0 Civital link: _https://civitai.com/models/12597?modelVersionId=14856_ Download link: _https://huggingface.co/emmajoanne/loras/resolve/main/Moxin_10.safetensors_ <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/Moxin_v10_sample1.jpeg" width="250"> prompt _waterfall, cliff, dragon, bird, cloud, thousand miles, valley \<lora:MoXin:0.4\>_ negative _fat, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), bad anatomy, DeepNegative, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad feet, cropped, poorly drawn hands, poorly drawn face, mutation, deformed, extra fingers, extra limbs, extra arms, extra legs, malformed limbs, fused fingers, too many fingers, long neck, cross-eyed, mutated hands, polar lowres, bad body, bad proportions, gross proportions, text, error, missing fingers, 
missing arms, missing legs, loli, child, bare shoulders, nsfw, naked, nude, human, boy, girl, man, woman, Dents, Creases, Wrinkles, paper artifects, fold marks_ **Shukezouma/疏可走马** v1.1 Civital link: _https://civitai.com/models/12597?modelVersionId=20143_ Download link: _https://huggingface.co/emmajoanne/loras/resolve/main/shukezouma_v1_1.safetensors_ <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/Shukezouma_v1_1_sample1.jpeg" width="250"> prompt _shukezouma, negative space, , shuimobysim , \<lora:shuV2:0.8\>, portrait of a woman standing , willow branches, (masterpiece, best quality:1.2), traditional chinese ink painting, \<lora:shuimobysimV3:0.7\>, modelshoot style, peaceful, (smile), looking at viewer, wearing long hanfu, hanfu, song, willow tree in background, wuchangshuo,_ negative _(worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, skin spots, acnes, skin blemishes, age spot, glans, (watermark:2),_ **Colorwater/沁彩** v4.0 Civital link: _https://civitai.com/models/16055/colorwater_ Download link: _https://huggingface.co/emmajoanne/loras/resolve/main/Colorwater_v4.safetensors_ <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/Colorwater_v4_sample1.jpeg" width="250"> prompt _there is ugliness in beauty, but there is also beauty in ugliness. in the style of adrian ghenie, esao andrews, jenny saville, edward hopper, surrealism, dark art by james jean, takato yamamoto, inkpunk minimalism \<lora:Colorwater_v4:0.55\>_ negative _3d, cartoon, anime, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, bad anatomy, girl, loli, young, large breasts, red eyes, muscular_ **Xiaorenshu/小人书·连环画** v2.0 Civital link: _https://civitai.com/models/18323/xiaorenshu_ Download link: _https://huggingface.co/emmajoanne/loras/resolve/main/Xiaorenshu_v2.safetensors_ <img src="https://huggingface.co/emmajoanne/loras/resolve/main/images/Xiaorenshu_v2_sample1.jpeg" width="250"> prompt _masterpiece, best quality, ultra-detailed, illustration, 1girl, solo, selecting, choosing, vending machine, soda, beverage, drink, cold, refreshing, thirsty, convenience, modern, technology, digital display, buttons, coins, banknotes, change, options, variety, decision, dilemma, casual clothes, summer outfit, t-shirt, denim shorts, sneakers, backpack, shoulder bag, brown hair, shoulder length, straight hair, natural highlights, side part, green eyes, round glasses, cute, charming, curious, interested, thoughtful, focused, inquisitive, decisive, confident, realistic style, detailed shading, lighting, texture, reflection, urban environment, street view, cityscape, blurred background, color contrast, complementary colors, digital art, leisure time, modern life, daily routine, snack, junk food, refreshment, consumerism, commercialism, consumer culture, consumer choice, options, diversity, convenience, youth culture, teenage life, adolescent\<lora:小人书:1\>_ negative _EasyNegative,sketch, duplicate, ugly, huge eyes, text, logo, monochrome, worst face, (bad and mutated hands:1.3), (worst quality:2.0), (low quality:2.0), (blurry:2.0), horror, geometry, bad_prompt, (bad hands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), crown braid, ((2girl)), (deformed fingers:1.2), (long fingers:1.2), (bad-artist-anime), bad-artist, bad hand,, ,bad_prompt_version2_
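Outside of the WebUI workflow shown in the prompt tables above, these LoRA files can also be tried with 🤗 diffusers. The snippet below is only a minimal sketch: it assumes a diffusers release recent enough to read kohya/A1111-format LoRA safetensors through `load_lora_weights`, and the base checkpoint, file name, prompts, and scale are illustrative choices rather than settings recommended by the original LoRA authors.

```python
# Minimal sketch of loading one LoRA file from this repository with diffusers.
# Assumption: the installed diffusers version can parse kohya/A1111-format
# LoRA .safetensors via load_lora_weights(); adjust weight_name to any file above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a single LoRA file by name from the emmajoanne/loras repository.
pipe.load_lora_weights("emmajoanne/loras", weight_name="hanfu_v3.safetensors")

image = pipe(
    "hanfu, ming style, 1girl, masterpiece, best quality",
    negative_prompt="worst quality, low quality, bad hands",
    num_inference_steps=25,
    cross_attention_kwargs={"scale": 0.6},  # roughly the <lora:hanfu_v30:0.6> weight used above
).images[0]
image.save("hanfu_sample.png")
```

In the WebUI the same effect is obtained with the `<lora:...>` tag and the trigger words listed in each table.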
Azaghast/GPT2-SCP-Descriptions
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2023-02-14T17:38:51Z
--- license: apache-2.0 tags: - masked-auto-encoding - generated_from_trainer datasets: - imagefolder model-index: - name: mae-vit-base-patch32-224-ct results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mae-vit-base-patch32-224-ct This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3803 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00015 - train_batch_size: 256 - eval_batch_size: 256 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1200.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 1.1257 | 1.0 | 51 | 1.1119 | | 1.0507 | 2.0 | 102 | 1.0434 | | 1.0046 | 3.0 | 153 | 0.9988 | | 0.9761 | 4.0 | 204 | 0.9725 | | 0.9572 | 5.0 | 255 | 0.9529 | | 0.9357 | 6.0 | 306 | 0.9304 | | 0.9128 | 7.0 | 357 | 0.9100 | | 0.9037 | 8.0 | 408 | 0.9004 | | 0.8984 | 9.0 | 459 | 0.8941 | | 0.8904 | 10.0 | 510 | 0.8896 | | 0.8846 | 11.0 | 561 | 0.8802 | | 0.8748 | 12.0 | 612 | 0.8775 | | 0.8692 | 13.0 | 663 | 0.8685 | | 0.8656 | 14.0 | 714 | 0.8665 | | 0.8634 | 15.0 | 765 | 0.8607 | | 0.8565 | 16.0 | 816 | 0.8561 | | 0.8555 | 17.0 | 867 | 0.8548 | | 0.8521 | 18.0 | 918 | 0.8464 | | 0.8478 | 19.0 | 969 | 0.8449 | | 0.847 | 20.0 | 1020 | 0.8455 | | 0.842 | 21.0 | 1071 | 0.8378 | | 0.8385 | 22.0 | 1122 | 0.8358 | | 0.8319 | 23.0 | 1173 | 0.8332 | | 0.8267 | 24.0 | 1224 | 0.8347 | | 0.8266 | 25.0 | 1275 | 0.8247 | | 0.8242 | 26.0 | 1326 | 0.8242 | | 0.8215 | 27.0 | 1377 | 0.8192 | | 0.8171 | 28.0 | 1428 | 0.8213 | | 0.8176 | 29.0 | 1479 | 0.8160 | | 0.8122 | 30.0 | 1530 | 0.8128 | | 0.8107 | 31.0 | 1581 | 0.8036 | | 0.8069 | 32.0 | 1632 | 0.8069 | | 0.8081 | 33.0 | 1683 | 0.8023 | | 0.8043 | 34.0 | 1734 | 0.8048 | | 0.8071 | 35.0 | 1785 | 0.8082 | | 0.8017 | 36.0 | 1836 | 0.7971 | | 0.7965 | 37.0 | 1887 | 0.7953 | | 0.7953 | 38.0 | 1938 | 0.8112 | | 0.7979 | 39.0 | 1989 | 0.7955 | | 0.7887 | 40.0 | 2040 | 0.7966 | | 0.7866 | 41.0 | 2091 | 0.7879 | | 0.7862 | 42.0 | 2142 | 0.7828 | | 0.7836 | 43.0 | 2193 | 0.7865 | | 0.7851 | 44.0 | 2244 | 0.7830 | | 0.7813 | 45.0 | 2295 | 0.7840 | | 0.78 | 46.0 | 2346 | 0.7749 | | 0.779 | 47.0 | 2397 | 0.7825 | | 0.7762 | 48.0 | 2448 | 0.7712 | | 0.7676 | 49.0 | 2499 | 0.7675 | | 0.7638 | 50.0 | 2550 | 0.7645 | | 0.7826 | 51.0 | 2601 | 0.7879 | | 0.7728 | 52.0 | 2652 | 0.7730 | | 0.7629 | 53.0 | 2703 | 0.7606 | | 0.7819 | 54.0 | 2754 | 0.7718 | | 0.7802 | 55.0 | 2805 | 0.7809 | | 0.7632 | 56.0 | 2856 | 0.7577 | | 0.7567 | 57.0 | 2907 | 0.7654 | | 0.7564 | 58.0 | 2958 | 0.7574 | | 0.7535 | 59.0 | 3009 | 0.7555 | | 0.75 | 60.0 | 3060 | 0.7484 | | 0.7512 | 61.0 | 3111 | 0.7487 | | 0.7493 | 62.0 | 3162 | 0.7462 | | 0.742 | 63.0 | 3213 | 0.7450 | | 0.7469 | 64.0 | 3264 | 0.7464 | | 0.7449 | 65.0 | 3315 | 0.7393 | | 0.7321 | 66.0 | 3366 | 0.7425 | | 0.7411 | 67.0 | 3417 | 0.7391 | | 0.7394 | 68.0 | 3468 | 0.7413 | | 0.7301 | 69.0 | 3519 | 0.7344 | | 0.7208 | 
70.0 | 3570 | 0.7256 | | 0.7211 | 71.0 | 3621 | 0.7225 | | 0.7273 | 72.0 | 3672 | 0.7264 | | 0.7267 | 73.0 | 3723 | 0.7221 | | 0.7222 | 74.0 | 3774 | 0.7256 | | 0.7175 | 75.0 | 3825 | 0.7202 | | 0.7174 | 76.0 | 3876 | 0.7149 | | 0.7143 | 77.0 | 3927 | 0.7127 | | 0.7106 | 78.0 | 3978 | 0.7061 | | 0.7188 | 79.0 | 4029 | 0.7153 | | 0.7103 | 80.0 | 4080 | 0.7086 | | 0.7055 | 81.0 | 4131 | 0.7098 | | 0.7026 | 82.0 | 4182 | 0.7075 | | 0.7191 | 83.0 | 4233 | 0.7127 | | 0.7027 | 84.0 | 4284 | 0.7172 | | 0.6981 | 85.0 | 4335 | 0.7070 | | 0.7064 | 86.0 | 4386 | 0.7029 | | 0.6943 | 87.0 | 4437 | 0.7046 | | 0.7025 | 88.0 | 4488 | 0.7036 | | 0.6959 | 89.0 | 4539 | 0.7094 | | 0.6988 | 90.0 | 4590 | 0.6917 | | 0.6912 | 91.0 | 4641 | 0.6926 | | 0.689 | 92.0 | 4692 | 0.6881 | | 0.687 | 93.0 | 4743 | 0.6866 | | 0.6867 | 94.0 | 4794 | 0.6873 | | 0.6832 | 95.0 | 4845 | 0.6820 | | 0.6863 | 96.0 | 4896 | 0.6809 | | 0.6908 | 97.0 | 4947 | 0.6792 | | 0.6891 | 98.0 | 4998 | 0.6796 | | 0.6803 | 99.0 | 5049 | 0.6793 | | 0.6755 | 100.0 | 5100 | 0.6738 | | 0.6735 | 101.0 | 5151 | 0.6750 | | 0.6727 | 102.0 | 5202 | 0.6729 | | 0.6695 | 103.0 | 5253 | 0.6734 | | 0.6678 | 104.0 | 5304 | 0.6702 | | 0.671 | 105.0 | 5355 | 0.6720 | | 0.6654 | 106.0 | 5406 | 0.6686 | | 0.669 | 107.0 | 5457 | 0.6683 | | 0.6628 | 108.0 | 5508 | 0.6639 | | 0.6655 | 109.0 | 5559 | 0.6663 | | 0.6637 | 110.0 | 5610 | 0.6651 | | 0.6643 | 111.0 | 5661 | 0.6639 | | 0.6607 | 112.0 | 5712 | 0.6561 | | 0.6598 | 113.0 | 5763 | 0.6591 | | 0.6589 | 114.0 | 5814 | 0.6610 | | 0.6566 | 115.0 | 5865 | 0.6566 | | 0.6706 | 116.0 | 5916 | 0.6749 | | 0.6688 | 117.0 | 5967 | 0.6670 | | 0.6657 | 118.0 | 6018 | 0.6599 | | 0.6611 | 119.0 | 6069 | 0.6567 | | 0.6528 | 120.0 | 6120 | 0.6591 | | 0.652 | 121.0 | 6171 | 0.6566 | | 0.6488 | 122.0 | 6222 | 0.6528 | | 0.6538 | 123.0 | 6273 | 0.6558 | | 0.6457 | 124.0 | 6324 | 0.6509 | | 0.643 | 125.0 | 6375 | 0.6462 | | 0.6433 | 126.0 | 6426 | 0.6459 | | 0.6451 | 127.0 | 6477 | 0.6454 | | 0.6413 | 128.0 | 6528 | 0.6441 | | 0.6407 | 129.0 | 6579 | 0.6409 | | 0.6381 | 130.0 | 6630 | 0.6422 | | 0.6408 | 131.0 | 6681 | 0.6432 | | 0.6404 | 132.0 | 6732 | 0.6408 | | 0.6412 | 133.0 | 6783 | 0.6354 | | 0.6348 | 134.0 | 6834 | 0.6350 | | 0.6307 | 135.0 | 6885 | 0.6389 | | 0.639 | 136.0 | 6936 | 0.6417 | | 0.6319 | 137.0 | 6987 | 0.6353 | | 0.6306 | 138.0 | 7038 | 0.6385 | | 0.6307 | 139.0 | 7089 | 0.6412 | | 0.6343 | 140.0 | 7140 | 0.6308 | | 0.6289 | 141.0 | 7191 | 0.6337 | | 0.6298 | 142.0 | 7242 | 0.6342 | | 0.6284 | 143.0 | 7293 | 0.6287 | | 0.624 | 144.0 | 7344 | 0.6305 | | 0.6266 | 145.0 | 7395 | 0.6338 | | 0.6253 | 146.0 | 7446 | 0.6281 | | 0.6204 | 147.0 | 7497 | 0.6241 | | 0.6232 | 148.0 | 7548 | 0.6222 | | 0.6213 | 149.0 | 7599 | 0.6201 | | 0.6225 | 150.0 | 7650 | 0.6237 | | 0.6228 | 151.0 | 7701 | 0.6193 | | 0.6191 | 152.0 | 7752 | 0.6200 | | 0.6198 | 153.0 | 7803 | 0.6229 | | 0.6183 | 154.0 | 7854 | 0.6213 | | 0.6181 | 155.0 | 7905 | 0.6213 | | 0.6168 | 156.0 | 7956 | 0.6164 | | 0.6156 | 157.0 | 8007 | 0.6160 | | 0.6125 | 158.0 | 8058 | 0.6153 | | 0.6126 | 159.0 | 8109 | 0.6151 | | 0.6115 | 160.0 | 8160 | 0.6163 | | 0.611 | 161.0 | 8211 | 0.6167 | | 0.6099 | 162.0 | 8262 | 0.6083 | | 0.6089 | 163.0 | 8313 | 0.6104 | | 0.6091 | 164.0 | 8364 | 0.6140 | | 0.6105 | 165.0 | 8415 | 0.6122 | | 0.61 | 166.0 | 8466 | 0.6106 | | 0.6104 | 167.0 | 8517 | 0.6062 | | 0.6067 | 168.0 | 8568 | 0.6095 | | 0.6056 | 169.0 | 8619 | 0.6067 | | 0.607 | 170.0 | 8670 | 0.6091 | | 0.6032 | 171.0 | 8721 | 0.6041 | | 0.6038 | 172.0 | 8772 | 0.6104 | | 
0.605 | 173.0 | 8823 | 0.6068 | | 0.6036 | 174.0 | 8874 | 0.6005 | | 0.6035 | 175.0 | 8925 | 0.6055 | | 0.6026 | 176.0 | 8976 | 0.6014 | | 0.6012 | 177.0 | 9027 | 0.6029 | | 0.5945 | 178.0 | 9078 | 0.5967 | | 0.6011 | 179.0 | 9129 | 0.5921 | | 0.5929 | 180.0 | 9180 | 0.5991 | | 0.5981 | 181.0 | 9231 | 0.5954 | | 0.6011 | 182.0 | 9282 | 0.6007 | | 0.5977 | 183.0 | 9333 | 0.6013 | | 0.5947 | 184.0 | 9384 | 0.6023 | | 0.59 | 185.0 | 9435 | 0.5968 | | 0.5924 | 186.0 | 9486 | 0.5987 | | 0.5906 | 187.0 | 9537 | 0.5915 | | 0.5928 | 188.0 | 9588 | 0.5877 | | 0.5849 | 189.0 | 9639 | 0.5911 | | 0.5913 | 190.0 | 9690 | 0.5954 | | 0.5863 | 191.0 | 9741 | 0.5906 | | 0.588 | 192.0 | 9792 | 0.5942 | | 0.5906 | 193.0 | 9843 | 0.5924 | | 0.5927 | 194.0 | 9894 | 0.5911 | | 0.5857 | 195.0 | 9945 | 0.5852 | | 0.5859 | 196.0 | 9996 | 0.5910 | | 0.5775 | 197.0 | 10047 | 0.5853 | | 0.586 | 198.0 | 10098 | 0.5877 | | 0.5853 | 199.0 | 10149 | 0.5848 | | 0.5824 | 200.0 | 10200 | 0.5854 | | 0.5797 | 201.0 | 10251 | 0.5834 | | 0.5857 | 202.0 | 10302 | 0.5792 | | 0.5863 | 203.0 | 10353 | 0.5824 | | 0.5826 | 204.0 | 10404 | 0.5838 | | 0.579 | 205.0 | 10455 | 0.5808 | | 0.5758 | 206.0 | 10506 | 0.5810 | | 0.5798 | 207.0 | 10557 | 0.5782 | | 0.576 | 208.0 | 10608 | 0.5818 | | 0.5717 | 209.0 | 10659 | 0.5826 | | 0.5774 | 210.0 | 10710 | 0.5800 | | 0.5724 | 211.0 | 10761 | 0.5813 | | 0.5706 | 212.0 | 10812 | 0.5755 | | 0.5737 | 213.0 | 10863 | 0.5788 | | 0.5791 | 214.0 | 10914 | 0.5769 | | 0.5712 | 215.0 | 10965 | 0.5767 | | 0.567 | 216.0 | 11016 | 0.5790 | | 0.5671 | 217.0 | 11067 | 0.5734 | | 0.5733 | 218.0 | 11118 | 0.5722 | | 0.5673 | 219.0 | 11169 | 0.5806 | | 0.5713 | 220.0 | 11220 | 0.5764 | | 0.5669 | 221.0 | 11271 | 0.5694 | | 0.5669 | 222.0 | 11322 | 0.5749 | | 0.5665 | 223.0 | 11373 | 0.5732 | | 0.5676 | 224.0 | 11424 | 0.5676 | | 0.5621 | 225.0 | 11475 | 0.5677 | | 0.5623 | 226.0 | 11526 | 0.5715 | | 0.5695 | 227.0 | 11577 | 0.5676 | | 0.5657 | 228.0 | 11628 | 0.5667 | | 0.565 | 229.0 | 11679 | 0.5644 | | 0.5617 | 230.0 | 11730 | 0.5650 | | 0.5587 | 231.0 | 11781 | 0.5637 | | 0.5591 | 232.0 | 11832 | 0.5652 | | 0.5607 | 233.0 | 11883 | 0.5648 | | 0.559 | 234.0 | 11934 | 0.5681 | | 0.5601 | 235.0 | 11985 | 0.5637 | | 0.5605 | 236.0 | 12036 | 0.5697 | | 0.5555 | 237.0 | 12087 | 0.5593 | | 0.5602 | 238.0 | 12138 | 0.5683 | | 0.5647 | 239.0 | 12189 | 0.5629 | | 0.5575 | 240.0 | 12240 | 0.5611 | | 0.5577 | 241.0 | 12291 | 0.5588 | | 0.5514 | 242.0 | 12342 | 0.5584 | | 0.5581 | 243.0 | 12393 | 0.5566 | | 0.555 | 244.0 | 12444 | 0.5563 | | 0.5571 | 245.0 | 12495 | 0.5541 | | 0.5549 | 246.0 | 12546 | 0.5541 | | 0.5521 | 247.0 | 12597 | 0.5521 | | 0.55 | 248.0 | 12648 | 0.5567 | | 0.5518 | 249.0 | 12699 | 0.5559 | | 0.5522 | 250.0 | 12750 | 0.5536 | | 0.5481 | 251.0 | 12801 | 0.5504 | | 0.5516 | 252.0 | 12852 | 0.5563 | | 0.5524 | 253.0 | 12903 | 0.5503 | | 0.5582 | 254.0 | 12954 | 0.5519 | | 0.5514 | 255.0 | 13005 | 0.5504 | | 0.5498 | 256.0 | 13056 | 0.5520 | | 0.5481 | 257.0 | 13107 | 0.5540 | | 0.551 | 258.0 | 13158 | 0.5503 | | 0.5495 | 259.0 | 13209 | 0.5491 | | 0.5483 | 260.0 | 13260 | 0.5461 | | 0.5468 | 261.0 | 13311 | 0.5586 | | 0.5454 | 262.0 | 13362 | 0.5495 | | 0.5447 | 263.0 | 13413 | 0.5455 | | 0.5475 | 264.0 | 13464 | 0.5511 | | 0.5439 | 265.0 | 13515 | 0.5453 | | 0.542 | 266.0 | 13566 | 0.5477 | | 0.5437 | 267.0 | 13617 | 0.5502 | | 0.5452 | 268.0 | 13668 | 0.5432 | | 0.5397 | 269.0 | 13719 | 0.5443 | | 0.5424 | 270.0 | 13770 | 0.5410 | | 0.5391 | 271.0 | 13821 | 0.5420 | | 0.5368 | 272.0 | 13872 | 
0.5402 | | 0.5387 | 273.0 | 13923 | 0.5401 | | 0.5362 | 274.0 | 13974 | 0.5414 | | 0.5374 | 275.0 | 14025 | 0.5418 | | 0.5375 | 276.0 | 14076 | 0.5415 | | 0.5427 | 277.0 | 14127 | 0.5436 | | 0.5382 | 278.0 | 14178 | 0.5366 | | 0.5341 | 279.0 | 14229 | 0.5411 | | 0.5348 | 280.0 | 14280 | 0.5377 | | 0.5339 | 281.0 | 14331 | 0.5393 | | 0.5359 | 282.0 | 14382 | 0.5359 | | 0.536 | 283.0 | 14433 | 0.5368 | | 0.5362 | 284.0 | 14484 | 0.5384 | | 0.532 | 285.0 | 14535 | 0.5346 | | 0.5298 | 286.0 | 14586 | 0.5376 | | 0.5352 | 287.0 | 14637 | 0.5373 | | 0.5344 | 288.0 | 14688 | 0.5359 | | 0.5399 | 289.0 | 14739 | 0.5427 | | 0.5329 | 290.0 | 14790 | 0.5349 | | 0.531 | 291.0 | 14841 | 0.5321 | | 0.5317 | 292.0 | 14892 | 0.5361 | | 0.5303 | 293.0 | 14943 | 0.5296 | | 0.5291 | 294.0 | 14994 | 0.5312 | | 0.5335 | 295.0 | 15045 | 0.5244 | | 0.5309 | 296.0 | 15096 | 0.5252 | | 0.5251 | 297.0 | 15147 | 0.5310 | | 0.5266 | 298.0 | 15198 | 0.5301 | | 0.5279 | 299.0 | 15249 | 0.5308 | | 0.5261 | 300.0 | 15300 | 0.5250 | | 0.5214 | 301.0 | 15351 | 0.5252 | | 0.5269 | 302.0 | 15402 | 0.5306 | | 0.5229 | 303.0 | 15453 | 0.5264 | | 0.5234 | 304.0 | 15504 | 0.5263 | | 0.5271 | 305.0 | 15555 | 0.5280 | | 0.525 | 306.0 | 15606 | 0.5233 | | 0.5216 | 307.0 | 15657 | 0.5211 | | 0.5247 | 308.0 | 15708 | 0.5246 | | 0.5203 | 309.0 | 15759 | 0.5279 | | 0.5201 | 310.0 | 15810 | 0.5246 | | 0.5254 | 311.0 | 15861 | 0.5306 | | 0.5166 | 312.0 | 15912 | 0.5224 | | 0.525 | 313.0 | 15963 | 0.5192 | | 0.5224 | 314.0 | 16014 | 0.5247 | | 0.5195 | 315.0 | 16065 | 0.5230 | | 0.5189 | 316.0 | 16116 | 0.5239 | | 0.5226 | 317.0 | 16167 | 0.5180 | | 0.5166 | 318.0 | 16218 | 0.5197 | | 0.5159 | 319.0 | 16269 | 0.5156 | | 0.5156 | 320.0 | 16320 | 0.5204 | | 0.5179 | 321.0 | 16371 | 0.5215 | | 0.5194 | 322.0 | 16422 | 0.5211 | | 0.519 | 323.0 | 16473 | 0.5212 | | 0.5112 | 324.0 | 16524 | 0.5175 | | 0.5163 | 325.0 | 16575 | 0.5225 | | 0.5165 | 326.0 | 16626 | 0.5172 | | 0.5104 | 327.0 | 16677 | 0.5200 | | 0.51 | 328.0 | 16728 | 0.5156 | | 0.5129 | 329.0 | 16779 | 0.5160 | | 0.5084 | 330.0 | 16830 | 0.5207 | | 0.5159 | 331.0 | 16881 | 0.5147 | | 0.5126 | 332.0 | 16932 | 0.5159 | | 0.5132 | 333.0 | 16983 | 0.5156 | | 0.5092 | 334.0 | 17034 | 0.5151 | | 0.5116 | 335.0 | 17085 | 0.5147 | | 0.5113 | 336.0 | 17136 | 0.5121 | | 0.5076 | 337.0 | 17187 | 0.5101 | | 0.5106 | 338.0 | 17238 | 0.5111 | | 0.5117 | 339.0 | 17289 | 0.5094 | | 0.5086 | 340.0 | 17340 | 0.5132 | | 0.5034 | 341.0 | 17391 | 0.5162 | | 0.5061 | 342.0 | 17442 | 0.5142 | | 0.5101 | 343.0 | 17493 | 0.5136 | | 0.5042 | 344.0 | 17544 | 0.5135 | | 0.5091 | 345.0 | 17595 | 0.5083 | | 0.5095 | 346.0 | 17646 | 0.5112 | | 0.5058 | 347.0 | 17697 | 0.5121 | | 0.504 | 348.0 | 17748 | 0.5082 | | 0.5016 | 349.0 | 17799 | 0.5075 | | 0.5042 | 350.0 | 17850 | 0.5090 | | 0.5036 | 351.0 | 17901 | 0.5089 | | 0.5045 | 352.0 | 17952 | 0.5095 | | 0.5067 | 353.0 | 18003 | 0.5087 | | 0.5026 | 354.0 | 18054 | 0.5064 | | 0.5001 | 355.0 | 18105 | 0.5055 | | 0.5036 | 356.0 | 18156 | 0.5057 | | 0.5012 | 357.0 | 18207 | 0.5083 | | 0.5031 | 358.0 | 18258 | 0.5110 | | 0.5021 | 359.0 | 18309 | 0.5128 | | 0.4973 | 360.0 | 18360 | 0.5014 | | 0.4988 | 361.0 | 18411 | 0.5028 | | 0.5013 | 362.0 | 18462 | 0.5035 | | 0.5001 | 363.0 | 18513 | 0.5040 | | 0.4972 | 364.0 | 18564 | 0.5056 | | 0.4994 | 365.0 | 18615 | 0.5070 | | 0.5005 | 366.0 | 18666 | 0.5070 | | 0.4993 | 367.0 | 18717 | 0.5053 | | 0.4975 | 368.0 | 18768 | 0.5036 | | 0.4967 | 369.0 | 18819 | 0.5026 | | 0.4968 | 370.0 | 18870 | 0.5011 | | 0.498 | 371.0 | 18921 | 
0.4990 | | 0.5022 | 372.0 | 18972 | 0.5032 | | 0.4959 | 373.0 | 19023 | 0.4972 | | 0.4921 | 374.0 | 19074 | 0.4967 | | 0.4936 | 375.0 | 19125 | 0.4967 | | 0.496 | 376.0 | 19176 | 0.5000 | | 0.4941 | 377.0 | 19227 | 0.4980 | | 0.4937 | 378.0 | 19278 | 0.4975 | | 0.4979 | 379.0 | 19329 | 0.4975 | | 0.4996 | 380.0 | 19380 | 0.4932 | | 0.4961 | 381.0 | 19431 | 0.4983 | | 0.4903 | 382.0 | 19482 | 0.4974 | | 0.4899 | 383.0 | 19533 | 0.4953 | | 0.4924 | 384.0 | 19584 | 0.4953 | | 0.4895 | 385.0 | 19635 | 0.4964 | | 0.4965 | 386.0 | 19686 | 0.5006 | | 0.4896 | 387.0 | 19737 | 0.4938 | | 0.497 | 388.0 | 19788 | 0.4956 | | 0.4924 | 389.0 | 19839 | 0.4960 | | 0.4904 | 390.0 | 19890 | 0.4972 | | 0.5 | 391.0 | 19941 | 0.4958 | | 0.4961 | 392.0 | 19992 | 0.4906 | | 0.491 | 393.0 | 20043 | 0.4918 | | 0.4878 | 394.0 | 20094 | 0.4954 | | 0.4881 | 395.0 | 20145 | 0.4916 | | 0.49 | 396.0 | 20196 | 0.4946 | | 0.4881 | 397.0 | 20247 | 0.4924 | | 0.4871 | 398.0 | 20298 | 0.4959 | | 0.492 | 399.0 | 20349 | 0.4867 | | 0.4883 | 400.0 | 20400 | 0.4891 | | 0.4864 | 401.0 | 20451 | 0.4946 | | 0.4898 | 402.0 | 20502 | 0.4922 | | 0.4841 | 403.0 | 20553 | 0.4902 | | 0.4879 | 404.0 | 20604 | 0.4921 | | 0.4801 | 405.0 | 20655 | 0.4914 | | 0.4877 | 406.0 | 20706 | 0.4882 | | 0.4858 | 407.0 | 20757 | 0.4882 | | 0.4856 | 408.0 | 20808 | 0.4872 | | 0.4825 | 409.0 | 20859 | 0.4871 | | 0.4865 | 410.0 | 20910 | 0.4853 | | 0.4834 | 411.0 | 20961 | 0.4908 | | 0.4815 | 412.0 | 21012 | 0.4847 | | 0.4828 | 413.0 | 21063 | 0.4919 | | 0.487 | 414.0 | 21114 | 0.4899 | | 0.4842 | 415.0 | 21165 | 0.4876 | | 0.4902 | 416.0 | 21216 | 0.4873 | | 0.4809 | 417.0 | 21267 | 0.4913 | | 0.4825 | 418.0 | 21318 | 0.4832 | | 0.4797 | 419.0 | 21369 | 0.4872 | | 0.4852 | 420.0 | 21420 | 0.4868 | | 0.4879 | 421.0 | 21471 | 0.4833 | | 0.4823 | 422.0 | 21522 | 0.4824 | | 0.4729 | 423.0 | 21573 | 0.4793 | | 0.4825 | 424.0 | 21624 | 0.4812 | | 0.4739 | 425.0 | 21675 | 0.4831 | | 0.4767 | 426.0 | 21726 | 0.4848 | | 0.4806 | 427.0 | 21777 | 0.4858 | | 0.4736 | 428.0 | 21828 | 0.4831 | | 0.4857 | 429.0 | 21879 | 0.4785 | | 0.4819 | 430.0 | 21930 | 0.4805 | | 0.4767 | 431.0 | 21981 | 0.4845 | | 0.4765 | 432.0 | 22032 | 0.4803 | | 0.4785 | 433.0 | 22083 | 0.4826 | | 0.4758 | 434.0 | 22134 | 0.4814 | | 0.4677 | 435.0 | 22185 | 0.4815 | | 0.4735 | 436.0 | 22236 | 0.4811 | | 0.4764 | 437.0 | 22287 | 0.4749 | | 0.4743 | 438.0 | 22338 | 0.4846 | | 0.4736 | 439.0 | 22389 | 0.4825 | | 0.4732 | 440.0 | 22440 | 0.4783 | | 0.4706 | 441.0 | 22491 | 0.4810 | | 0.4735 | 442.0 | 22542 | 0.4780 | | 0.4796 | 443.0 | 22593 | 0.4881 | | 0.4724 | 444.0 | 22644 | 0.4785 | | 0.4701 | 445.0 | 22695 | 0.4753 | | 0.4764 | 446.0 | 22746 | 0.4787 | | 0.4729 | 447.0 | 22797 | 0.4824 | | 0.4726 | 448.0 | 22848 | 0.4742 | | 0.4736 | 449.0 | 22899 | 0.4775 | | 0.4764 | 450.0 | 22950 | 0.4755 | | 0.4701 | 451.0 | 23001 | 0.4755 | | 0.4746 | 452.0 | 23052 | 0.4750 | | 0.4727 | 453.0 | 23103 | 0.4731 | | 0.4691 | 454.0 | 23154 | 0.4686 | | 0.4673 | 455.0 | 23205 | 0.4761 | | 0.4726 | 456.0 | 23256 | 0.4763 | | 0.4726 | 457.0 | 23307 | 0.4807 | | 0.4696 | 458.0 | 23358 | 0.4738 | | 0.4689 | 459.0 | 23409 | 0.4727 | | 0.4702 | 460.0 | 23460 | 0.4793 | | 0.4692 | 461.0 | 23511 | 0.4696 | | 0.4694 | 462.0 | 23562 | 0.4713 | | 0.4628 | 463.0 | 23613 | 0.4747 | | 0.4677 | 464.0 | 23664 | 0.4787 | | 0.4673 | 465.0 | 23715 | 0.4682 | | 0.4709 | 466.0 | 23766 | 0.4692 | | 0.463 | 467.0 | 23817 | 0.4676 | | 0.4654 | 468.0 | 23868 | 0.4696 | | 0.4648 | 469.0 | 23919 | 0.4675 | | 0.4642 | 470.0 | 23970 | 
0.4700 | | 0.4687 | 471.0 | 24021 | 0.4691 | | 0.469 | 472.0 | 24072 | 0.4749 | | 0.4692 | 473.0 | 24123 | 0.4672 | | 0.4635 | 474.0 | 24174 | 0.4707 | | 0.4635 | 475.0 | 24225 | 0.4696 | | 0.4655 | 476.0 | 24276 | 0.4652 | | 0.4633 | 477.0 | 24327 | 0.4702 | | 0.4622 | 478.0 | 24378 | 0.4637 | | 0.4571 | 479.0 | 24429 | 0.4678 | | 0.4645 | 480.0 | 24480 | 0.4635 | | 0.4654 | 481.0 | 24531 | 0.4655 | | 0.4588 | 482.0 | 24582 | 0.4688 | | 0.4608 | 483.0 | 24633 | 0.4639 | | 0.4606 | 484.0 | 24684 | 0.4654 | | 0.4624 | 485.0 | 24735 | 0.4661 | | 0.4612 | 486.0 | 24786 | 0.4669 | | 0.46 | 487.0 | 24837 | 0.4653 | | 0.4623 | 488.0 | 24888 | 0.4688 | | 0.4648 | 489.0 | 24939 | 0.4648 | | 0.4602 | 490.0 | 24990 | 0.4620 | | 0.4587 | 491.0 | 25041 | 0.4652 | | 0.4627 | 492.0 | 25092 | 0.4694 | | 0.4638 | 493.0 | 25143 | 0.4620 | | 0.4565 | 494.0 | 25194 | 0.4653 | | 0.4588 | 495.0 | 25245 | 0.4598 | | 0.4568 | 496.0 | 25296 | 0.4617 | | 0.4524 | 497.0 | 25347 | 0.4631 | | 0.4635 | 498.0 | 25398 | 0.4640 | | 0.4534 | 499.0 | 25449 | 0.4643 | | 0.4599 | 500.0 | 25500 | 0.4663 | | 0.4549 | 501.0 | 25551 | 0.4588 | | 0.4595 | 502.0 | 25602 | 0.4661 | | 0.46 | 503.0 | 25653 | 0.4626 | | 0.4504 | 504.0 | 25704 | 0.4591 | | 0.459 | 505.0 | 25755 | 0.4623 | | 0.4582 | 506.0 | 25806 | 0.4617 | | 0.4532 | 507.0 | 25857 | 0.4580 | | 0.4555 | 508.0 | 25908 | 0.4615 | | 0.4571 | 509.0 | 25959 | 0.4617 | | 0.4561 | 510.0 | 26010 | 0.4579 | | 0.4541 | 511.0 | 26061 | 0.4601 | | 0.4534 | 512.0 | 26112 | 0.4627 | | 0.4569 | 513.0 | 26163 | 0.4615 | | 0.4583 | 514.0 | 26214 | 0.4527 | | 0.4498 | 515.0 | 26265 | 0.4587 | | 0.4511 | 516.0 | 26316 | 0.4552 | | 0.4535 | 517.0 | 26367 | 0.4579 | | 0.4551 | 518.0 | 26418 | 0.4543 | | 0.4581 | 519.0 | 26469 | 0.4597 | | 0.4573 | 520.0 | 26520 | 0.4540 | | 0.4495 | 521.0 | 26571 | 0.4578 | | 0.4532 | 522.0 | 26622 | 0.4605 | | 0.4474 | 523.0 | 26673 | 0.4579 | | 0.4504 | 524.0 | 26724 | 0.4563 | | 0.4529 | 525.0 | 26775 | 0.4583 | | 0.4475 | 526.0 | 26826 | 0.4616 | | 0.4457 | 527.0 | 26877 | 0.4558 | | 0.4532 | 528.0 | 26928 | 0.4584 | | 0.4566 | 529.0 | 26979 | 0.4573 | | 0.4546 | 530.0 | 27030 | 0.4563 | | 0.4479 | 531.0 | 27081 | 0.4628 | | 0.4485 | 532.0 | 27132 | 0.4547 | | 0.4491 | 533.0 | 27183 | 0.4539 | | 0.4522 | 534.0 | 27234 | 0.4536 | | 0.4477 | 535.0 | 27285 | 0.4561 | | 0.45 | 536.0 | 27336 | 0.4530 | | 0.4522 | 537.0 | 27387 | 0.4525 | | 0.4475 | 538.0 | 27438 | 0.4554 | | 0.4475 | 539.0 | 27489 | 0.4486 | | 0.4512 | 540.0 | 27540 | 0.4584 | | 0.445 | 541.0 | 27591 | 0.4543 | | 0.4478 | 542.0 | 27642 | 0.4507 | | 0.4472 | 543.0 | 27693 | 0.4520 | | 0.448 | 544.0 | 27744 | 0.4507 | | 0.4447 | 545.0 | 27795 | 0.4514 | | 0.4485 | 546.0 | 27846 | 0.4553 | | 0.4482 | 547.0 | 27897 | 0.4532 | | 0.4448 | 548.0 | 27948 | 0.4533 | | 0.4467 | 549.0 | 27999 | 0.4511 | | 0.4473 | 550.0 | 28050 | 0.4531 | | 0.4423 | 551.0 | 28101 | 0.4462 | | 0.4473 | 552.0 | 28152 | 0.4538 | | 0.4463 | 553.0 | 28203 | 0.4472 | | 0.4459 | 554.0 | 28254 | 0.4486 | | 0.4432 | 555.0 | 28305 | 0.4470 | | 0.4448 | 556.0 | 28356 | 0.4522 | | 0.4406 | 557.0 | 28407 | 0.4528 | | 0.4433 | 558.0 | 28458 | 0.4502 | | 0.4447 | 559.0 | 28509 | 0.4471 | | 0.4438 | 560.0 | 28560 | 0.4500 | | 0.4433 | 561.0 | 28611 | 0.4471 | | 0.4412 | 562.0 | 28662 | 0.4491 | | 0.4357 | 563.0 | 28713 | 0.4474 | | 0.4424 | 564.0 | 28764 | 0.4481 | | 0.4412 | 565.0 | 28815 | 0.4480 | | 0.4483 | 566.0 | 28866 | 0.4453 | | 0.4397 | 567.0 | 28917 | 0.4435 | | 0.4377 | 568.0 | 28968 | 0.4460 | | 0.4424 | 569.0 | 29019 | 
0.4475 | | 0.4412 | 570.0 | 29070 | 0.4445 | | 0.4435 | 571.0 | 29121 | 0.4418 | | 0.4398 | 572.0 | 29172 | 0.4434 | | 0.4427 | 573.0 | 29223 | 0.4417 | | 0.4409 | 574.0 | 29274 | 0.4410 | | 0.4425 | 575.0 | 29325 | 0.4434 | | 0.4402 | 576.0 | 29376 | 0.4489 | | 0.4394 | 577.0 | 29427 | 0.4435 | | 0.4379 | 578.0 | 29478 | 0.4447 | | 0.4391 | 579.0 | 29529 | 0.4471 | | 0.4404 | 580.0 | 29580 | 0.4435 | | 0.4399 | 581.0 | 29631 | 0.4411 | | 0.4353 | 582.0 | 29682 | 0.4416 | | 0.4417 | 583.0 | 29733 | 0.4417 | | 0.4389 | 584.0 | 29784 | 0.4399 | | 0.4378 | 585.0 | 29835 | 0.4432 | | 0.439 | 586.0 | 29886 | 0.4427 | | 0.431 | 587.0 | 29937 | 0.4403 | | 0.4348 | 588.0 | 29988 | 0.4409 | | 0.4363 | 589.0 | 30039 | 0.4425 | | 0.4399 | 590.0 | 30090 | 0.4394 | | 0.4342 | 591.0 | 30141 | 0.4412 | | 0.4342 | 592.0 | 30192 | 0.4399 | | 0.4348 | 593.0 | 30243 | 0.4420 | | 0.4326 | 594.0 | 30294 | 0.4446 | | 0.4333 | 595.0 | 30345 | 0.4430 | | 0.4336 | 596.0 | 30396 | 0.4397 | | 0.4314 | 597.0 | 30447 | 0.4418 | | 0.4371 | 598.0 | 30498 | 0.4411 | | 0.4333 | 599.0 | 30549 | 0.4385 | | 0.4337 | 600.0 | 30600 | 0.4394 | | 0.4371 | 601.0 | 30651 | 0.4407 | | 0.4294 | 602.0 | 30702 | 0.4395 | | 0.4323 | 603.0 | 30753 | 0.4404 | | 0.4303 | 604.0 | 30804 | 0.4422 | | 0.4325 | 605.0 | 30855 | 0.4376 | | 0.44 | 606.0 | 30906 | 0.4399 | | 0.4343 | 607.0 | 30957 | 0.4403 | | 0.4313 | 608.0 | 31008 | 0.4397 | | 0.4338 | 609.0 | 31059 | 0.4379 | | 0.4299 | 610.0 | 31110 | 0.4349 | | 0.4325 | 611.0 | 31161 | 0.4370 | | 0.429 | 612.0 | 31212 | 0.4371 | | 0.4291 | 613.0 | 31263 | 0.4299 | | 0.4349 | 614.0 | 31314 | 0.4364 | | 0.4308 | 615.0 | 31365 | 0.4336 | | 0.4305 | 616.0 | 31416 | 0.4343 | | 0.4267 | 617.0 | 31467 | 0.4391 | | 0.4329 | 618.0 | 31518 | 0.4365 | | 0.4269 | 619.0 | 31569 | 0.4333 | | 0.4251 | 620.0 | 31620 | 0.4343 | | 0.427 | 621.0 | 31671 | 0.4344 | | 0.4327 | 622.0 | 31722 | 0.4345 | | 0.4263 | 623.0 | 31773 | 0.4370 | | 0.4288 | 624.0 | 31824 | 0.4323 | | 0.4316 | 625.0 | 31875 | 0.4325 | | 0.431 | 626.0 | 31926 | 0.4328 | | 0.4316 | 627.0 | 31977 | 0.4316 | | 0.4325 | 628.0 | 32028 | 0.4311 | | 0.4287 | 629.0 | 32079 | 0.4323 | | 0.4267 | 630.0 | 32130 | 0.4302 | | 0.426 | 631.0 | 32181 | 0.4342 | | 0.4259 | 632.0 | 32232 | 0.4324 | | 0.427 | 633.0 | 32283 | 0.4315 | | 0.4268 | 634.0 | 32334 | 0.4300 | | 0.4251 | 635.0 | 32385 | 0.4385 | | 0.4291 | 636.0 | 32436 | 0.4358 | | 0.4273 | 637.0 | 32487 | 0.4342 | | 0.4238 | 638.0 | 32538 | 0.4311 | | 0.4262 | 639.0 | 32589 | 0.4327 | | 0.4251 | 640.0 | 32640 | 0.4329 | | 0.4276 | 641.0 | 32691 | 0.4344 | | 0.4274 | 642.0 | 32742 | 0.4304 | | 0.4269 | 643.0 | 32793 | 0.4263 | | 0.4217 | 644.0 | 32844 | 0.4305 | | 0.4204 | 645.0 | 32895 | 0.4314 | | 0.4268 | 646.0 | 32946 | 0.4284 | | 0.4227 | 647.0 | 32997 | 0.4281 | | 0.4236 | 648.0 | 33048 | 0.4320 | | 0.4245 | 649.0 | 33099 | 0.4295 | | 0.4229 | 650.0 | 33150 | 0.4262 | | 0.423 | 651.0 | 33201 | 0.4239 | | 0.4209 | 652.0 | 33252 | 0.4294 | | 0.4209 | 653.0 | 33303 | 0.4315 | | 0.425 | 654.0 | 33354 | 0.4299 | | 0.418 | 655.0 | 33405 | 0.4282 | | 0.423 | 656.0 | 33456 | 0.4264 | | 0.4267 | 657.0 | 33507 | 0.4296 | | 0.4226 | 658.0 | 33558 | 0.4269 | | 0.4213 | 659.0 | 33609 | 0.4296 | | 0.4192 | 660.0 | 33660 | 0.4259 | | 0.4234 | 661.0 | 33711 | 0.4243 | | 0.4205 | 662.0 | 33762 | 0.4256 | | 0.4185 | 663.0 | 33813 | 0.4251 | | 0.4212 | 664.0 | 33864 | 0.4231 | | 0.4228 | 665.0 | 33915 | 0.4250 | | 0.421 | 666.0 | 33966 | 0.4284 | | 0.4226 | 667.0 | 34017 | 0.4243 | | 0.4201 | 668.0 | 34068 | 
0.4279 | | 0.4213 | 669.0 | 34119 | 0.4210 | | 0.4237 | 670.0 | 34170 | 0.4264 | | 0.4228 | 671.0 | 34221 | 0.4237 | | 0.4181 | 672.0 | 34272 | 0.4245 | | 0.4242 | 673.0 | 34323 | 0.4244 | | 0.4178 | 674.0 | 34374 | 0.4250 | | 0.4184 | 675.0 | 34425 | 0.4274 | | 0.4163 | 676.0 | 34476 | 0.4221 | | 0.4288 | 677.0 | 34527 | 0.4245 | | 0.4205 | 678.0 | 34578 | 0.4258 | | 0.4167 | 679.0 | 34629 | 0.4243 | | 0.4172 | 680.0 | 34680 | 0.4241 | | 0.4212 | 681.0 | 34731 | 0.4216 | | 0.4164 | 682.0 | 34782 | 0.4214 | | 0.4171 | 683.0 | 34833 | 0.4230 | | 0.4166 | 684.0 | 34884 | 0.4261 | | 0.4172 | 685.0 | 34935 | 0.4224 | | 0.4188 | 686.0 | 34986 | 0.4209 | | 0.4187 | 687.0 | 35037 | 0.4168 | | 0.4174 | 688.0 | 35088 | 0.4201 | | 0.4184 | 689.0 | 35139 | 0.4177 | | 0.4126 | 690.0 | 35190 | 0.4192 | | 0.4168 | 691.0 | 35241 | 0.4171 | | 0.4152 | 692.0 | 35292 | 0.4202 | | 0.4137 | 693.0 | 35343 | 0.4210 | | 0.4139 | 694.0 | 35394 | 0.4143 | | 0.418 | 695.0 | 35445 | 0.4250 | | 0.4116 | 696.0 | 35496 | 0.4237 | | 0.4113 | 697.0 | 35547 | 0.4172 | | 0.4131 | 698.0 | 35598 | 0.4219 | | 0.4148 | 699.0 | 35649 | 0.4179 | | 0.4117 | 700.0 | 35700 | 0.4264 | | 0.4115 | 701.0 | 35751 | 0.4244 | | 0.4149 | 702.0 | 35802 | 0.4223 | | 0.4129 | 703.0 | 35853 | 0.4190 | | 0.4134 | 704.0 | 35904 | 0.4197 | | 0.4155 | 705.0 | 35955 | 0.4203 | | 0.4112 | 706.0 | 36006 | 0.4206 | | 0.4113 | 707.0 | 36057 | 0.4176 | | 0.4117 | 708.0 | 36108 | 0.4202 | | 0.4128 | 709.0 | 36159 | 0.4186 | | 0.4111 | 710.0 | 36210 | 0.4196 | | 0.4168 | 711.0 | 36261 | 0.4225 | | 0.408 | 712.0 | 36312 | 0.4146 | | 0.4117 | 713.0 | 36363 | 0.4185 | | 0.4089 | 714.0 | 36414 | 0.4214 | | 0.408 | 715.0 | 36465 | 0.4196 | | 0.4126 | 716.0 | 36516 | 0.4175 | | 0.4106 | 717.0 | 36567 | 0.4145 | | 0.4112 | 718.0 | 36618 | 0.4160 | | 0.4064 | 719.0 | 36669 | 0.4175 | | 0.41 | 720.0 | 36720 | 0.4181 | | 0.4046 | 721.0 | 36771 | 0.4159 | | 0.4141 | 722.0 | 36822 | 0.4119 | | 0.414 | 723.0 | 36873 | 0.4167 | | 0.4118 | 724.0 | 36924 | 0.4166 | | 0.4106 | 725.0 | 36975 | 0.4157 | | 0.4079 | 726.0 | 37026 | 0.4176 | | 0.4114 | 727.0 | 37077 | 0.4108 | | 0.4117 | 728.0 | 37128 | 0.4135 | | 0.4155 | 729.0 | 37179 | 0.4171 | | 0.4117 | 730.0 | 37230 | 0.4147 | | 0.4092 | 731.0 | 37281 | 0.4094 | | 0.4091 | 732.0 | 37332 | 0.4133 | | 0.4081 | 733.0 | 37383 | 0.4142 | | 0.4084 | 734.0 | 37434 | 0.4170 | | 0.4082 | 735.0 | 37485 | 0.4158 | | 0.4097 | 736.0 | 37536 | 0.4118 | | 0.4082 | 737.0 | 37587 | 0.4105 | | 0.4043 | 738.0 | 37638 | 0.4162 | | 0.4011 | 739.0 | 37689 | 0.4122 | | 0.4082 | 740.0 | 37740 | 0.4158 | | 0.4098 | 741.0 | 37791 | 0.4153 | | 0.4082 | 742.0 | 37842 | 0.4107 | | 0.4073 | 743.0 | 37893 | 0.4117 | | 0.403 | 744.0 | 37944 | 0.4163 | | 0.4024 | 745.0 | 37995 | 0.4080 | | 0.4098 | 746.0 | 38046 | 0.4082 | | 0.4072 | 747.0 | 38097 | 0.4111 | | 0.4065 | 748.0 | 38148 | 0.4119 | | 0.404 | 749.0 | 38199 | 0.4087 | | 0.4024 | 750.0 | 38250 | 0.4093 | | 0.4054 | 751.0 | 38301 | 0.4111 | | 0.403 | 752.0 | 38352 | 0.4093 | | 0.4042 | 753.0 | 38403 | 0.4117 | | 0.4025 | 754.0 | 38454 | 0.4088 | | 0.4025 | 755.0 | 38505 | 0.4102 | | 0.4056 | 756.0 | 38556 | 0.4135 | | 0.4025 | 757.0 | 38607 | 0.4125 | | 0.4035 | 758.0 | 38658 | 0.4110 | | 0.4026 | 759.0 | 38709 | 0.4127 | | 0.4028 | 760.0 | 38760 | 0.4107 | | 0.4007 | 761.0 | 38811 | 0.4079 | | 0.4043 | 762.0 | 38862 | 0.4106 | | 0.3979 | 763.0 | 38913 | 0.4084 | | 0.4071 | 764.0 | 38964 | 0.4093 | | 0.4097 | 765.0 | 39015 | 0.4130 | | 0.4052 | 766.0 | 39066 | 0.4118 | | 0.4063 | 767.0 | 39117 | 
0.4055 | | 0.4051 | 768.0 | 39168 | 0.4056 | | 0.403 | 769.0 | 39219 | 0.4054 | | 0.4061 | 770.0 | 39270 | 0.4102 | | 0.3989 | 771.0 | 39321 | 0.4141 | | 0.4022 | 772.0 | 39372 | 0.4050 | | 0.4018 | 773.0 | 39423 | 0.4098 | | 0.3993 | 774.0 | 39474 | 0.4090 | | 0.3984 | 775.0 | 39525 | 0.4074 | | 0.4034 | 776.0 | 39576 | 0.4068 | | 0.4036 | 777.0 | 39627 | 0.4043 | | 0.4027 | 778.0 | 39678 | 0.4056 | | 0.3999 | 779.0 | 39729 | 0.4104 | | 0.401 | 780.0 | 39780 | 0.4033 | | 0.4058 | 781.0 | 39831 | 0.4058 | | 0.3977 | 782.0 | 39882 | 0.4094 | | 0.402 | 783.0 | 39933 | 0.4057 | | 0.3972 | 784.0 | 39984 | 0.4044 | | 0.3997 | 785.0 | 40035 | 0.4075 | | 0.4003 | 786.0 | 40086 | 0.4074 | | 0.3973 | 787.0 | 40137 | 0.4045 | | 0.3989 | 788.0 | 40188 | 0.4078 | | 0.4029 | 789.0 | 40239 | 0.4092 | | 0.4011 | 790.0 | 40290 | 0.4051 | | 0.3975 | 791.0 | 40341 | 0.4008 | | 0.3952 | 792.0 | 40392 | 0.4049 | | 0.4032 | 793.0 | 40443 | 0.4054 | | 0.4027 | 794.0 | 40494 | 0.4034 | | 0.397 | 795.0 | 40545 | 0.4042 | | 0.3941 | 796.0 | 40596 | 0.4030 | | 0.3929 | 797.0 | 40647 | 0.4031 | | 0.4016 | 798.0 | 40698 | 0.4003 | | 0.3926 | 799.0 | 40749 | 0.4026 | | 0.3985 | 800.0 | 40800 | 0.4046 | | 0.3978 | 801.0 | 40851 | 0.4002 | | 0.3972 | 802.0 | 40902 | 0.4058 | | 0.3993 | 803.0 | 40953 | 0.4026 | | 0.3935 | 804.0 | 41004 | 0.4049 | | 0.3973 | 805.0 | 41055 | 0.3989 | | 0.4002 | 806.0 | 41106 | 0.4003 | | 0.3918 | 807.0 | 41157 | 0.4006 | | 0.4001 | 808.0 | 41208 | 0.3997 | | 0.397 | 809.0 | 41259 | 0.4018 | | 0.3984 | 810.0 | 41310 | 0.4030 | | 0.3925 | 811.0 | 41361 | 0.4074 | | 0.398 | 812.0 | 41412 | 0.4032 | | 0.4 | 813.0 | 41463 | 0.3987 | | 0.3943 | 814.0 | 41514 | 0.4015 | | 0.3973 | 815.0 | 41565 | 0.3962 | | 0.3922 | 816.0 | 41616 | 0.4032 | | 0.3902 | 817.0 | 41667 | 0.3993 | | 0.3942 | 818.0 | 41718 | 0.4018 | | 0.3994 | 819.0 | 41769 | 0.4031 | | 0.3959 | 820.0 | 41820 | 0.4008 | | 0.3911 | 821.0 | 41871 | 0.4036 | | 0.3941 | 822.0 | 41922 | 0.3997 | | 0.3936 | 823.0 | 41973 | 0.3971 | | 0.397 | 824.0 | 42024 | 0.4011 | | 0.3974 | 825.0 | 42075 | 0.3964 | | 0.3921 | 826.0 | 42126 | 0.4010 | | 0.3961 | 827.0 | 42177 | 0.4019 | | 0.3912 | 828.0 | 42228 | 0.4004 | | 0.3939 | 829.0 | 42279 | 0.3980 | | 0.3917 | 830.0 | 42330 | 0.4027 | | 0.3977 | 831.0 | 42381 | 0.4005 | | 0.3881 | 832.0 | 42432 | 0.3983 | | 0.3939 | 833.0 | 42483 | 0.4026 | | 0.393 | 834.0 | 42534 | 0.3991 | | 0.3928 | 835.0 | 42585 | 0.3980 | | 0.394 | 836.0 | 42636 | 0.3953 | | 0.3908 | 837.0 | 42687 | 0.4002 | | 0.3926 | 838.0 | 42738 | 0.4015 | | 0.3947 | 839.0 | 42789 | 0.3991 | | 0.3965 | 840.0 | 42840 | 0.3969 | | 0.3934 | 841.0 | 42891 | 0.4002 | | 0.3916 | 842.0 | 42942 | 0.3969 | | 0.3887 | 843.0 | 42993 | 0.3941 | | 0.3938 | 844.0 | 43044 | 0.3972 | | 0.3928 | 845.0 | 43095 | 0.4015 | | 0.3948 | 846.0 | 43146 | 0.3976 | | 0.3925 | 847.0 | 43197 | 0.3953 | | 0.3876 | 848.0 | 43248 | 0.3958 | | 0.3857 | 849.0 | 43299 | 0.3967 | | 0.389 | 850.0 | 43350 | 0.3975 | | 0.3905 | 851.0 | 43401 | 0.3916 | | 0.389 | 852.0 | 43452 | 0.3987 | | 0.3872 | 853.0 | 43503 | 0.3965 | | 0.3902 | 854.0 | 43554 | 0.3963 | | 0.3883 | 855.0 | 43605 | 0.3941 | | 0.393 | 856.0 | 43656 | 0.3945 | | 0.3908 | 857.0 | 43707 | 0.3987 | | 0.3891 | 858.0 | 43758 | 0.3970 | | 0.39 | 859.0 | 43809 | 0.3934 | | 0.3894 | 860.0 | 43860 | 0.3981 | | 0.3859 | 861.0 | 43911 | 0.3940 | | 0.3896 | 862.0 | 43962 | 0.3956 | | 0.3897 | 863.0 | 44013 | 0.3952 | | 0.385 | 864.0 | 44064 | 0.3941 | | 0.3876 | 865.0 | 44115 | 0.3937 | | 0.3889 | 866.0 | 44166 | 0.3975 | 
| 0.3926 | 867.0 | 44217 | 0.3953 | | 0.3895 | 868.0 | 44268 | 0.3918 | | 0.3926 | 869.0 | 44319 | 0.3926 | | 0.3861 | 870.0 | 44370 | 0.3933 | | 0.3881 | 871.0 | 44421 | 0.3941 | | 0.3863 | 872.0 | 44472 | 0.3939 | | 0.3863 | 873.0 | 44523 | 0.3913 | | 0.386 | 874.0 | 44574 | 0.3919 | | 0.382 | 875.0 | 44625 | 0.3879 | | 0.384 | 876.0 | 44676 | 0.3938 | | 0.3898 | 877.0 | 44727 | 0.3949 | | 0.3913 | 878.0 | 44778 | 0.3947 | | 0.3859 | 879.0 | 44829 | 0.3952 | | 0.385 | 880.0 | 44880 | 0.3950 | | 0.3872 | 881.0 | 44931 | 0.3877 | | 0.383 | 882.0 | 44982 | 0.3905 | | 0.387 | 883.0 | 45033 | 0.3939 | | 0.3834 | 884.0 | 45084 | 0.3947 | | 0.3866 | 885.0 | 45135 | 0.3935 | | 0.3834 | 886.0 | 45186 | 0.3925 | | 0.3848 | 887.0 | 45237 | 0.3903 | | 0.3896 | 888.0 | 45288 | 0.3918 | | 0.3863 | 889.0 | 45339 | 0.3880 | | 0.384 | 890.0 | 45390 | 0.3884 | | 0.3844 | 891.0 | 45441 | 0.3907 | | 0.3863 | 892.0 | 45492 | 0.3954 | | 0.3872 | 893.0 | 45543 | 0.3919 | | 0.3869 | 894.0 | 45594 | 0.3928 | | 0.3801 | 895.0 | 45645 | 0.3941 | | 0.3832 | 896.0 | 45696 | 0.3930 | | 0.3886 | 897.0 | 45747 | 0.3933 | | 0.3871 | 898.0 | 45798 | 0.3917 | | 0.3892 | 899.0 | 45849 | 0.3927 | | 0.3864 | 900.0 | 45900 | 0.3934 | | 0.3827 | 901.0 | 45951 | 0.3916 | | 0.3838 | 902.0 | 46002 | 0.3932 | | 0.3859 | 903.0 | 46053 | 0.3901 | | 0.382 | 904.0 | 46104 | 0.3918 | | 0.3824 | 905.0 | 46155 | 0.3939 | | 0.3799 | 906.0 | 46206 | 0.3907 | | 0.3851 | 907.0 | 46257 | 0.3891 | | 0.3854 | 908.0 | 46308 | 0.3885 | | 0.3855 | 909.0 | 46359 | 0.3912 | | 0.3855 | 910.0 | 46410 | 0.3912 | | 0.3799 | 911.0 | 46461 | 0.3882 | | 0.387 | 912.0 | 46512 | 0.3894 | | 0.3792 | 913.0 | 46563 | 0.3887 | | 0.3831 | 914.0 | 46614 | 0.3875 | | 0.3821 | 915.0 | 46665 | 0.3863 | | 0.3853 | 916.0 | 46716 | 0.3884 | | 0.381 | 917.0 | 46767 | 0.3873 | | 0.3847 | 918.0 | 46818 | 0.3850 | | 0.3813 | 919.0 | 46869 | 0.3875 | | 0.3853 | 920.0 | 46920 | 0.3860 | | 0.3849 | 921.0 | 46971 | 0.3880 | | 0.3771 | 922.0 | 47022 | 0.3891 | | 0.3815 | 923.0 | 47073 | 0.3887 | | 0.3827 | 924.0 | 47124 | 0.3902 | | 0.3828 | 925.0 | 47175 | 0.3900 | | 0.3861 | 926.0 | 47226 | 0.3915 | | 0.383 | 927.0 | 47277 | 0.3911 | | 0.3785 | 928.0 | 47328 | 0.3837 | | 0.3825 | 929.0 | 47379 | 0.3879 | | 0.3793 | 930.0 | 47430 | 0.3921 | | 0.3836 | 931.0 | 47481 | 0.3893 | | 0.3858 | 932.0 | 47532 | 0.3874 | | 0.387 | 933.0 | 47583 | 0.3881 | | 0.3855 | 934.0 | 47634 | 0.3863 | | 0.3813 | 935.0 | 47685 | 0.3833 | | 0.3787 | 936.0 | 47736 | 0.3876 | | 0.3834 | 937.0 | 47787 | 0.3870 | | 0.3807 | 938.0 | 47838 | 0.3839 | | 0.3788 | 939.0 | 47889 | 0.3863 | | 0.3788 | 940.0 | 47940 | 0.3847 | | 0.3819 | 941.0 | 47991 | 0.3876 | | 0.3814 | 942.0 | 48042 | 0.3845 | | 0.3817 | 943.0 | 48093 | 0.3830 | | 0.3838 | 944.0 | 48144 | 0.3880 | | 0.3787 | 945.0 | 48195 | 0.3880 | | 0.3812 | 946.0 | 48246 | 0.3884 | | 0.3806 | 947.0 | 48297 | 0.3891 | | 0.3816 | 948.0 | 48348 | 0.3855 | | 0.3813 | 949.0 | 48399 | 0.3847 | | 0.3811 | 950.0 | 48450 | 0.3847 | | 0.3776 | 951.0 | 48501 | 0.3831 | | 0.3794 | 952.0 | 48552 | 0.3867 | | 0.3782 | 953.0 | 48603 | 0.3812 | | 0.3834 | 954.0 | 48654 | 0.3852 | | 0.3785 | 955.0 | 48705 | 0.3830 | | 0.3789 | 956.0 | 48756 | 0.3852 | | 0.3801 | 957.0 | 48807 | 0.3882 | | 0.3771 | 958.0 | 48858 | 0.3842 | | 0.3808 | 959.0 | 48909 | 0.3840 | | 0.3762 | 960.0 | 48960 | 0.3849 | | 0.3777 | 961.0 | 49011 | 0.3842 | | 0.3781 | 962.0 | 49062 | 0.3874 | | 0.3781 | 963.0 | 49113 | 0.3838 | | 0.376 | 964.0 | 49164 | 0.3863 | | 0.3777 | 965.0 | 49215 | 0.3827 | | 
0.3808 | 966.0 | 49266 | 0.3853 | | 0.3835 | 967.0 | 49317 | 0.3869 | | 0.3801 | 968.0 | 49368 | 0.3859 | | 0.3839 | 969.0 | 49419 | 0.3841 | | 0.3768 | 970.0 | 49470 | 0.3849 | | 0.3797 | 971.0 | 49521 | 0.3844 | | 0.3763 | 972.0 | 49572 | 0.3855 | | 0.3788 | 973.0 | 49623 | 0.3832 | | 0.374 | 974.0 | 49674 | 0.3858 | | 0.3785 | 975.0 | 49725 | 0.3805 | | 0.3752 | 976.0 | 49776 | 0.3855 | | 0.3752 | 977.0 | 49827 | 0.3827 | | 0.3779 | 978.0 | 49878 | 0.3826 | | 0.3769 | 979.0 | 49929 | 0.3824 | | 0.3778 | 980.0 | 49980 | 0.3848 | | 0.3749 | 981.0 | 50031 | 0.3831 | | 0.3756 | 982.0 | 50082 | 0.3879 | | 0.3739 | 983.0 | 50133 | 0.3830 | | 0.3769 | 984.0 | 50184 | 0.3845 | | 0.3737 | 985.0 | 50235 | 0.3894 | | 0.3769 | 986.0 | 50286 | 0.3815 | | 0.373 | 987.0 | 50337 | 0.3797 | | 0.374 | 988.0 | 50388 | 0.3827 | | 0.3778 | 989.0 | 50439 | 0.3844 | | 0.3773 | 990.0 | 50490 | 0.3846 | | 0.3759 | 991.0 | 50541 | 0.3826 | | 0.3752 | 992.0 | 50592 | 0.3843 | | 0.3747 | 993.0 | 50643 | 0.3817 | | 0.3781 | 994.0 | 50694 | 0.3784 | | 0.3751 | 995.0 | 50745 | 0.3832 | | 0.3758 | 996.0 | 50796 | 0.3800 | | 0.3718 | 997.0 | 50847 | 0.3837 | | 0.3745 | 998.0 | 50898 | 0.3823 | | 0.3757 | 999.0 | 50949 | 0.3798 | | 0.3786 | 1000.0 | 51000 | 0.3794 | | 0.3738 | 1001.0 | 51051 | 0.3781 | | 0.3779 | 1002.0 | 51102 | 0.3851 | | 0.3735 | 1003.0 | 51153 | 0.3844 | | 0.3753 | 1004.0 | 51204 | 0.3841 | | 0.3701 | 1005.0 | 51255 | 0.3805 | | 0.3738 | 1006.0 | 51306 | 0.3826 | | 0.3729 | 1007.0 | 51357 | 0.3793 | | 0.3765 | 1008.0 | 51408 | 0.3825 | | 0.3725 | 1009.0 | 51459 | 0.3817 | | 0.3766 | 1010.0 | 51510 | 0.3813 | | 0.3736 | 1011.0 | 51561 | 0.3834 | | 0.3747 | 1012.0 | 51612 | 0.3800 | | 0.3726 | 1013.0 | 51663 | 0.3817 | | 0.3819 | 1014.0 | 51714 | 0.3840 | | 0.3799 | 1015.0 | 51765 | 0.3834 | | 0.3754 | 1016.0 | 51816 | 0.3818 | | 0.3762 | 1017.0 | 51867 | 0.3769 | | 0.3718 | 1018.0 | 51918 | 0.3794 | | 0.3785 | 1019.0 | 51969 | 0.3825 | | 0.3754 | 1020.0 | 52020 | 0.3827 | | 0.374 | 1021.0 | 52071 | 0.3818 | | 0.3785 | 1022.0 | 52122 | 0.3780 | | 0.3735 | 1023.0 | 52173 | 0.3815 | | 0.3726 | 1024.0 | 52224 | 0.3794 | | 0.3798 | 1025.0 | 52275 | 0.3787 | | 0.3714 | 1026.0 | 52326 | 0.3810 | | 0.3776 | 1027.0 | 52377 | 0.3787 | | 0.3688 | 1028.0 | 52428 | 0.3771 | | 0.375 | 1029.0 | 52479 | 0.3776 | | 0.372 | 1030.0 | 52530 | 0.3795 | | 0.3736 | 1031.0 | 52581 | 0.3781 | | 0.3713 | 1032.0 | 52632 | 0.3815 | | 0.3772 | 1033.0 | 52683 | 0.3802 | | 0.375 | 1034.0 | 52734 | 0.3788 | | 0.3725 | 1035.0 | 52785 | 0.3819 | | 0.3696 | 1036.0 | 52836 | 0.3836 | | 0.3741 | 1037.0 | 52887 | 0.3814 | | 0.3734 | 1038.0 | 52938 | 0.3799 | | 0.3759 | 1039.0 | 52989 | 0.3789 | | 0.3726 | 1040.0 | 53040 | 0.3802 | | 0.3693 | 1041.0 | 53091 | 0.3769 | | 0.3705 | 1042.0 | 53142 | 0.3812 | | 0.3691 | 1043.0 | 53193 | 0.3806 | | 0.3736 | 1044.0 | 53244 | 0.3796 | | 0.3707 | 1045.0 | 53295 | 0.3784 | | 0.3735 | 1046.0 | 53346 | 0.3752 | | 0.3773 | 1047.0 | 53397 | 0.3801 | | 0.3714 | 1048.0 | 53448 | 0.3800 | | 0.3747 | 1049.0 | 53499 | 0.3787 | | 0.3735 | 1050.0 | 53550 | 0.3775 | | 0.3727 | 1051.0 | 53601 | 0.3771 | | 0.3736 | 1052.0 | 53652 | 0.3833 | | 0.3676 | 1053.0 | 53703 | 0.3796 | | 0.3688 | 1054.0 | 53754 | 0.3758 | | 0.369 | 1055.0 | 53805 | 0.3775 | | 0.3696 | 1056.0 | 53856 | 0.3811 | | 0.3707 | 1057.0 | 53907 | 0.3776 | | 0.3765 | 1058.0 | 53958 | 0.3804 | | 0.3697 | 1059.0 | 54009 | 0.3813 | | 0.3718 | 1060.0 | 54060 | 0.3722 | | 0.3699 | 1061.0 | 54111 | 0.3771 | | 0.3725 | 1062.0 | 54162 | 0.3780 | | 0.3705 
| 1063.0 | 54213 | 0.3767 | | 0.3698 | 1064.0 | 54264 | 0.3783 | | 0.374 | 1065.0 | 54315 | 0.3775 | | 0.3665 | 1066.0 | 54366 | 0.3813 | | 0.3695 | 1067.0 | 54417 | 0.3801 | | 0.3705 | 1068.0 | 54468 | 0.3805 | | 0.3709 | 1069.0 | 54519 | 0.3780 | | 0.3762 | 1070.0 | 54570 | 0.3758 | | 0.3718 | 1071.0 | 54621 | 0.3801 | | 0.3736 | 1072.0 | 54672 | 0.3769 | | 0.3702 | 1073.0 | 54723 | 0.3763 | | 0.3716 | 1074.0 | 54774 | 0.3791 | | 0.3684 | 1075.0 | 54825 | 0.3745 | | 0.3682 | 1076.0 | 54876 | 0.3796 | | 0.3699 | 1077.0 | 54927 | 0.3784 | | 0.3745 | 1078.0 | 54978 | 0.3794 | | 0.3721 | 1079.0 | 55029 | 0.3780 | | 0.3758 | 1080.0 | 55080 | 0.3792 | | 0.3742 | 1081.0 | 55131 | 0.3781 | | 0.3693 | 1082.0 | 55182 | 0.3819 | | 0.3676 | 1083.0 | 55233 | 0.3746 | | 0.3684 | 1084.0 | 55284 | 0.3812 | | 0.3727 | 1085.0 | 55335 | 0.3745 | | 0.3689 | 1086.0 | 55386 | 0.3743 | | 0.3704 | 1087.0 | 55437 | 0.3785 | | 0.3664 | 1088.0 | 55488 | 0.3774 | | 0.3704 | 1089.0 | 55539 | 0.3757 | | 0.3702 | 1090.0 | 55590 | 0.3790 | | 0.3747 | 1091.0 | 55641 | 0.3798 | | 0.3704 | 1092.0 | 55692 | 0.3756 | | 0.3749 | 1093.0 | 55743 | 0.3783 | | 0.3686 | 1094.0 | 55794 | 0.3759 | | 0.369 | 1095.0 | 55845 | 0.3762 | | 0.3671 | 1096.0 | 55896 | 0.3783 | | 0.3686 | 1097.0 | 55947 | 0.3780 | | 0.3693 | 1098.0 | 55998 | 0.3778 | | 0.3728 | 1099.0 | 56049 | 0.3759 | | 0.3715 | 1100.0 | 56100 | 0.3777 | | 0.3712 | 1101.0 | 56151 | 0.3775 | | 0.3695 | 1102.0 | 56202 | 0.3767 | | 0.3715 | 1103.0 | 56253 | 0.3762 | | 0.3728 | 1104.0 | 56304 | 0.3775 | | 0.368 | 1105.0 | 56355 | 0.3783 | | 0.3705 | 1106.0 | 56406 | 0.3797 | | 0.3705 | 1107.0 | 56457 | 0.3771 | | 0.3734 | 1108.0 | 56508 | 0.3754 | | 0.3701 | 1109.0 | 56559 | 0.3793 | | 0.3707 | 1110.0 | 56610 | 0.3729 | | 0.3677 | 1111.0 | 56661 | 0.3763 | | 0.3734 | 1112.0 | 56712 | 0.3813 | | 0.3714 | 1113.0 | 56763 | 0.3772 | | 0.3654 | 1114.0 | 56814 | 0.3765 | | 0.3692 | 1115.0 | 56865 | 0.3757 | | 0.3721 | 1116.0 | 56916 | 0.3749 | | 0.3741 | 1117.0 | 56967 | 0.3769 | | 0.3649 | 1118.0 | 57018 | 0.3806 | | 0.3709 | 1119.0 | 57069 | 0.3720 | | 0.3721 | 1120.0 | 57120 | 0.3794 | | 0.3701 | 1121.0 | 57171 | 0.3748 | | 0.3674 | 1122.0 | 57222 | 0.3787 | | 0.3669 | 1123.0 | 57273 | 0.3736 | | 0.3726 | 1124.0 | 57324 | 0.3789 | | 0.3672 | 1125.0 | 57375 | 0.3774 | | 0.3674 | 1126.0 | 57426 | 0.3778 | | 0.3702 | 1127.0 | 57477 | 0.3772 | | 0.3717 | 1128.0 | 57528 | 0.3766 | | 0.3703 | 1129.0 | 57579 | 0.3757 | | 0.3695 | 1130.0 | 57630 | 0.3808 | | 0.3729 | 1131.0 | 57681 | 0.3721 | | 0.3657 | 1132.0 | 57732 | 0.3784 | | 0.3676 | 1133.0 | 57783 | 0.3793 | | 0.3684 | 1134.0 | 57834 | 0.3797 | | 0.3703 | 1135.0 | 57885 | 0.3771 | | 0.3705 | 1136.0 | 57936 | 0.3752 | | 0.3691 | 1137.0 | 57987 | 0.3773 | | 0.3673 | 1138.0 | 58038 | 0.3766 | | 0.3715 | 1139.0 | 58089 | 0.3779 | | 0.37 | 1140.0 | 58140 | 0.3750 | | 0.3709 | 1141.0 | 58191 | 0.3786 | | 0.3696 | 1142.0 | 58242 | 0.3776 | | 0.3752 | 1143.0 | 58293 | 0.3758 | | 0.3675 | 1144.0 | 58344 | 0.3762 | | 0.3681 | 1145.0 | 58395 | 0.3741 | | 0.3684 | 1146.0 | 58446 | 0.3794 | | 0.3663 | 1147.0 | 58497 | 0.3720 | | 0.3712 | 1148.0 | 58548 | 0.3742 | | 0.3672 | 1149.0 | 58599 | 0.3786 | | 0.369 | 1150.0 | 58650 | 0.3737 | | 0.3648 | 1151.0 | 58701 | 0.3767 | | 0.3704 | 1152.0 | 58752 | 0.3740 | | 0.3695 | 1153.0 | 58803 | 0.3781 | | 0.3707 | 1154.0 | 58854 | 0.3753 | | 0.3661 | 1155.0 | 58905 | 0.3774 | | 0.367 | 1156.0 | 58956 | 0.3763 | | 0.3657 | 1157.0 | 59007 | 0.3767 | | 0.3638 | 1158.0 | 59058 | 0.3738 | | 0.3728 | 1159.0 
| 59109 | 0.3732 | | 0.3748 | 1160.0 | 59160 | 0.3787 | | 0.3753 | 1161.0 | 59211 | 0.3743 | | 0.3663 | 1162.0 | 59262 | 0.3758 | | 0.3694 | 1163.0 | 59313 | 0.3772 | | 0.3657 | 1164.0 | 59364 | 0.3763 | | 0.3643 | 1165.0 | 59415 | 0.3770 | | 0.3679 | 1166.0 | 59466 | 0.3772 | | 0.37 | 1167.0 | 59517 | 0.3724 | | 0.3693 | 1168.0 | 59568 | 0.3752 | | 0.3705 | 1169.0 | 59619 | 0.3732 | | 0.3671 | 1170.0 | 59670 | 0.3767 | | 0.3729 | 1171.0 | 59721 | 0.3723 | | 0.3701 | 1172.0 | 59772 | 0.3768 | | 0.3717 | 1173.0 | 59823 | 0.3782 | | 0.3716 | 1174.0 | 59874 | 0.3721 | | 0.3723 | 1175.0 | 59925 | 0.3712 | | 0.3674 | 1176.0 | 59976 | 0.3746 | | 0.365 | 1177.0 | 60027 | 0.3768 | | 0.3725 | 1178.0 | 60078 | 0.3760 | | 0.3679 | 1179.0 | 60129 | 0.3742 | | 0.3707 | 1180.0 | 60180 | 0.3753 | | 0.3698 | 1181.0 | 60231 | 0.3730 | | 0.3697 | 1182.0 | 60282 | 0.3748 | | 0.368 | 1183.0 | 60333 | 0.3722 | | 0.3689 | 1184.0 | 60384 | 0.3724 | | 0.3667 | 1185.0 | 60435 | 0.3731 | | 0.3708 | 1186.0 | 60486 | 0.3785 | | 0.3684 | 1187.0 | 60537 | 0.3755 | | 0.3701 | 1188.0 | 60588 | 0.3774 | | 0.3685 | 1189.0 | 60639 | 0.3733 | | 0.37 | 1190.0 | 60690 | 0.3773 | | 0.372 | 1191.0 | 60741 | 0.3761 | | 0.3677 | 1192.0 | 60792 | 0.3733 | | 0.367 | 1193.0 | 60843 | 0.3770 | | 0.3641 | 1194.0 | 60894 | 0.3731 | | 0.3679 | 1195.0 | 60945 | 0.3739 | | 0.3709 | 1196.0 | 60996 | 0.3731 | | 0.3668 | 1197.0 | 61047 | 0.3784 | | 0.3678 | 1198.0 | 61098 | 0.3754 | | 0.3642 | 1199.0 | 61149 | 0.3795 | | 0.3717 | 1200.0 | 61200 | 0.3766 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.0 - Datasets 2.8.0 - Tokenizers 0.13.2
BE/demo-sentiment2021
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-14T12:12:48Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-parsbert-uncased-ncbi_disease results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-parsbert-uncased-ncbi_disease This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on the [ncbi-persian](https://huggingface.co/datasets/Amir13/ncbi-persian) dataset. It achieves the following results on the evaluation set: - Loss: 0.1018 - Precision: 0.8192 - Recall: 0.8645 - F1: 0.8412 - Accuracy: 0.9862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 169 | 0.0648 | 0.7154 | 0.8237 | 0.7657 | 0.9813 | | No log | 2.0 | 338 | 0.0573 | 0.7870 | 0.8263 | 0.8062 | 0.9853 | | 0.0596 | 3.0 | 507 | 0.0639 | 0.7893 | 0.8776 | 0.8312 | 0.9858 | | 0.0596 | 4.0 | 676 | 0.0678 | 0.8150 | 0.8461 | 0.8302 | 0.9860 | | 0.0596 | 5.0 | 845 | 0.0737 | 0.8070 | 0.8474 | 0.8267 | 0.9862 | | 0.0065 | 6.0 | 1014 | 0.0834 | 0.8052 | 0.8592 | 0.8313 | 0.9856 | | 0.0065 | 7.0 | 1183 | 0.0918 | 0.8099 | 0.8355 | 0.8225 | 0.9859 | | 0.0065 | 8.0 | 1352 | 0.0882 | 0.8061 | 0.8697 | 0.8367 | 0.9857 | | 0.0021 | 9.0 | 1521 | 0.0903 | 0.8045 | 0.85 | 0.8266 | 0.9860 | | 0.0021 | 10.0 | 1690 | 0.0965 | 0.8303 | 0.85 | 0.8401 | 0.9866 | | 0.0021 | 11.0 | 1859 | 0.0954 | 0.8182 | 0.8645 | 0.8407 | 0.9860 | | 0.0008 | 12.0 | 2028 | 0.0998 | 0.8206 | 0.8605 | 0.8401 | 0.9862 | | 0.0008 | 13.0 | 2197 | 0.0995 | 0.82 | 0.8632 | 0.8410 | 0.9862 | | 0.0008 | 14.0 | 2366 | 0.1015 | 0.8214 | 0.8592 | 0.8399 | 0.9861 | | 0.0004 | 15.0 | 2535 | 0.1018 | 0.8192 | 0.8645 | 0.8412 | 0.9862 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2 ### Citation If you used the datasets and models in this repository, please cite it. ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.09611, doi = {10.48550/ARXIV.2302.09611}, url = {https://arxiv.org/abs/2302.09611}, author = {Sartipi, Amir and Fatemi, Afsaneh}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
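A minimal inference sketch for a fine-tuned token-classification checkpoint like the one in the card above; the repo id below is a placeholder rather than the published name, and the example sentence is only illustrative:

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the actual published checkpoint for this fine-tune.
ner = pipeline(
    "token-classification",
    model="your-username/bert-base-parsbert-uncased-ncbi_disease",
    aggregation_strategy="simple",  # merge word pieces back into whole entity spans
)

# Persian input, since the model was trained on the translated ncbi-persian dataset.
print(ner("بیمار مبتلا به دیابت نوع دو است."))
```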
BSC-LT/roberta-base-biomedical-clinical-es
[ "pytorch", "roberta", "fill-mask", "es", "arxiv:2109.03570", "arxiv:2109.07765", "transformers", "biomedical", "clinical", "spanish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - eval_loss: 0.3158 - eval_accuracy: 0.902 - eval_f1: 0.8997 - eval_runtime: 102.1735 - eval_samples_per_second: 19.575 - eval_steps_per_second: 0.313 - epoch: 1.0 - step: 250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cpu - Datasets 2.9.0 - Tokenizers 0.13.2
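A minimal inference sketch for a fine-tuned sequence classifier like the emotion model in the card above, assuming the weights are published under a placeholder repo id:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id for the fine-tuned emotion classifier described above.
model_id = "your-username/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't wait to see the results of this experiment!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Label names come from the checkpoint's config (the emotion dataset's class names).
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```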
BSC-LT/roberta-base-bne-capitel-pos
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "pos", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2023-02-14T12:25:14Z
--- license: apache-2.0 tags: - text2text-generation - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-base10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base10 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0434 - Rouge1: 18.1613 - Rouge2: 17.0556 - Rougel: 18.1408 - Rougelsum: 18.1449 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.5547 | 1.0 | 52 | 0.1037 | 18.1615 | 16.7672 | 18.0609 | 18.1437 | | 0.0925 | 2.0 | 104 | 0.0618 | 18.1562 | 17.051 | 18.1349 | 18.1449 | | 0.0642 | 3.0 | 156 | 0.0551 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0537 | 4.0 | 208 | 0.0499 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0485 | 5.0 | 260 | 0.0485 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0454 | 6.0 | 312 | 0.0481 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0435 | 7.0 | 364 | 0.0463 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.041 | 8.0 | 416 | 0.0458 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0401 | 9.0 | 468 | 0.0454 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0384 | 10.0 | 520 | 0.0462 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0369 | 11.0 | 572 | 0.0441 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0362 | 12.0 | 624 | 0.0444 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0357 | 13.0 | 676 | 0.0443 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0349 | 14.0 | 728 | 0.0429 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.034 | 15.0 | 780 | 0.0450 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0335 | 16.0 | 832 | 0.0438 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0332 | 17.0 | 884 | 0.0440 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0325 | 18.0 | 936 | 0.0436 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0322 | 19.0 | 988 | 0.0442 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0317 | 20.0 | 1040 | 0.0438 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0314 | 21.0 | 1092 | 0.0441 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0314 | 22.0 | 1144 | 0.0440 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0312 | 23.0 | 1196 | 0.0436 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.0312 | 24.0 | 1248 | 0.0433 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | | 0.031 | 25.0 | 1300 | 0.0434 | 18.1613 | 17.0556 | 18.1408 | 18.1449 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
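A minimal generation sketch for a fine-tuned flan-t5 checkpoint like the one in the card above; the repo id and the `summarize:` prefix are assumptions, since the card does not describe the training data or prompt format:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id; the task prefix is also an assumption.
model_id = "your-username/flan-t5-base10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```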
BSC-LT/roberta-base-bne
[ "pytorch", "roberta", "fill-mask", "es", "dataset:bne", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
594
2023-02-14T12:26:06Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: justlotw/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BSC-LT/roberta-base-ca
[ "pytorch", "roberta", "fill-mask", "ca", "transformers", "masked-lm", "BERTa", "catalan", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
2023-02-14T12:26:28Z
--- license: mit tags: - generated_from_trainer datasets: - imdb model-index: - name: gpt2-large-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-large-imdb This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
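A minimal sampling sketch for a causal-LM fine-tune like the gpt2-large IMDB model above (the repo id is a placeholder):

```python
from transformers import pipeline

# Hypothetical repo id for the imdb fine-tune of gpt2-large described above.
generator = pipeline("text-generation", model="your-username/gpt2-large-imdb")

out = generator("This movie was", max_new_tokens=40, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```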
BSC-LT/roberta-large-bne-capitel-ner
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "ner", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image --- # ◆VaLMix ![a](Image/logo.png) - "VaLMix(ValentineMix)" is a merged model based on "pastel-mix". --- # ◆Discord [Join Discord Server](https://discord.gg/eN6aSWRddT) - The merged model community of Hemlok. ---- # 《Notice》 - **"VaLMix(Including "VaL-V2" and "VaLJ)" are no longer available for commercial use due to a change in the license of the merging source.** - Instead, we have created **"VaLMix2 series"** Please use them. ---- # ◆About - [日本語ReadMe](https://hemlok.notion.site/VaLMix-d7ebffdbd185435c8d71833f4c7f1d10) - Sampler: DDIM or DPM++ SDE Karras - Steps: 50~ - Clipskip: 2 - CFG Scale: 5-8 - Denoise strength: 0.5-0.7 - "Hires. fix" recommended. - Negative prompts should be as few as possible. ---- # ◆Model Types - Prompt: ``` kawaii, 1girl, (solo), (cowboy shot), (dynamic angle), Ruffled Dresses, (The great hall of the mansion), tiara, Luxurious interior, looking at viewer, ``` ![](VaLMixV2/Image/val2.png) --- ## ◇VaLMix2-MAIN ![](VaLMixV2/Image/v2-1.png) - VaLMix Remake --- ## ◇VaLMix2-EX ![](VaLMixV2/Image/v2-2.png) - VaLMixV2 Remake --- ## ◇MJVaL2 ![](VaLMixV2/Image/v2-3.png) - VaLMix2-EX + Openjourney-v4 --- # ◆How to use - Please download the file by yourself and use it with WebUI(AUTOMATIC1111) etc. - Use the fp16 version for Colab(T4) or a PC with low RAM. - The models are located in "Model/fp32" and "Model/fp16" respectively. ---- # Disclaimer - The creation of SFW and NSFW images is at the discretion of the individual creator. - This model is not a model created to publish NSFW content in public places, etc. ---- ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) (Full text of the license: https://huggingface.co/spaces/CompVis/stable-diffusion-license)
BSC-LT/roberta-large-bne-capitel-pos
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "pos", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-02-14T12:32:05Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 21.30 +/- 15.45 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
BSC-LT/roberta-large-bne-sqac
[ "pytorch", "roberta", "question-answering", "es", "dataset:BSC-TeMU/SQAC", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "qa", "question answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
2023-02-14T12:39:07Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Ili1991/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BSen/wav2vec2-base-timit-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-02-14T12:49:08Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Ili1991/q-Taxi-v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BW/TEST
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2023-02-14T12:53:34Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1566.90 +/- 80.69 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
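The usage section in the card above is left as a TODO; a minimal sketch of loading a Stable-Baselines3 checkpoint from the Hub, with placeholder repo id and filename, would be:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo id and filename -- substitute the actual uploaded checkpoint.
checkpoint = load_from_hub(repo_id="your-username/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# Evaluating the policy further requires an AntBulletEnv-v0 environment (pybullet).
```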
Babelscape/wikineural-multilingual-ner
[ "pytorch", "tensorboard", "safetensors", "bert", "token-classification", "de", "en", "es", "fr", "it", "nl", "pl", "pt", "ru", "multilingual", "dataset:Babelscape/wikineural", "transformers", "named-entity-recognition", "sequence-tagger-model", "license:cc-by-nc-sa-4.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
41608
null
--- license: mit tags: - generated_from_trainer datasets: Amir13/ncbi-persian metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-base-ncbi_disease results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-ncbi_disease This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [ncbi-persian](https://huggingface.co/datasets/Amir13/ncbi-persian) dataset. It achieves the following results on the evaluation set: - Loss: 0.0915 - Precision: 0.8273 - Recall: 0.8763 - F1: 0.8511 - Accuracy: 0.9866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 169 | 0.0682 | 0.7049 | 0.7763 | 0.7389 | 0.9784 | | No log | 2.0 | 338 | 0.0575 | 0.7558 | 0.8592 | 0.8042 | 0.9832 | | 0.0889 | 3.0 | 507 | 0.0558 | 0.8092 | 0.8592 | 0.8334 | 0.9859 | | 0.0889 | 4.0 | 676 | 0.0595 | 0.8316 | 0.8579 | 0.8446 | 0.9858 | | 0.0889 | 5.0 | 845 | 0.0665 | 0.7998 | 0.8566 | 0.8272 | 0.9850 | | 0.0191 | 6.0 | 1014 | 0.0796 | 0.8229 | 0.85 | 0.8362 | 0.9862 | | 0.0191 | 7.0 | 1183 | 0.0783 | 0.8193 | 0.8474 | 0.8331 | 0.9860 | | 0.0191 | 8.0 | 1352 | 0.0792 | 0.8257 | 0.8539 | 0.8396 | 0.9864 | | 0.0079 | 9.0 | 1521 | 0.0847 | 0.8154 | 0.8658 | 0.8398 | 0.9851 | | 0.0079 | 10.0 | 1690 | 0.0855 | 0.8160 | 0.875 | 0.8444 | 0.9857 | | 0.0079 | 11.0 | 1859 | 0.0868 | 0.8081 | 0.8645 | 0.8353 | 0.9864 | | 0.0037 | 12.0 | 2028 | 0.0912 | 0.8036 | 0.8776 | 0.8390 | 0.9853 | | 0.0037 | 13.0 | 2197 | 0.0907 | 0.8323 | 0.8684 | 0.8500 | 0.9868 | | 0.0037 | 14.0 | 2366 | 0.0899 | 0.8192 | 0.8763 | 0.8468 | 0.9865 | | 0.0023 | 15.0 | 2535 | 0.0915 | 0.8273 | 0.8763 | 0.8511 | 0.9866 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2 ### Citation If you used the datasets and models in this repository, please cite it. ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.09611, doi = {10.48550/ARXIV.2302.09611}, url = {https://arxiv.org/abs/2302.09611}, author = {Sartipi, Amir and Fatemi, Afsaneh}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
Bagus/ser-japanese
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-14T13:10:25Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Bagus/wav2vec2-large-xlsr-bahasa-indonesia
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "el", "dataset:common_voice_id_6.1", "transformers", "audio", "speech", "bahasa-indonesia", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2023-02-14T13:11:13Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-parsbert-uncased-conll2003 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-parsbert-uncased-conll2003 This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on the [conll2003-persian](https://huggingface.co/datasets/Amir13/conll2003-persian ) dataset. It achieves the following results on the evaluation set: - Loss: 0.1631 - Precision: 0.8776 - Recall: 0.8898 - F1: 0.8836 - Accuracy: 0.9765 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 430 | 0.1063 | 0.8404 | 0.8476 | 0.8440 | 0.9696 | | 0.1854 | 2.0 | 860 | 0.0982 | 0.8694 | 0.8696 | 0.8695 | 0.9743 | | 0.0589 | 3.0 | 1290 | 0.1051 | 0.8649 | 0.8775 | 0.8712 | 0.9741 | | 0.0285 | 4.0 | 1720 | 0.1233 | 0.8700 | 0.8787 | 0.8743 | 0.9745 | | 0.0136 | 5.0 | 2150 | 0.1360 | 0.8700 | 0.8738 | 0.8719 | 0.9745 | | 0.0077 | 6.0 | 2580 | 0.1390 | 0.8785 | 0.8812 | 0.8799 | 0.9754 | | 0.0046 | 7.0 | 3010 | 0.1438 | 0.8803 | 0.8827 | 0.8815 | 0.9760 | | 0.0046 | 8.0 | 3440 | 0.1510 | 0.8763 | 0.8794 | 0.8779 | 0.9756 | | 0.0027 | 9.0 | 3870 | 0.1606 | 0.8798 | 0.8851 | 0.8824 | 0.9764 | | 0.0021 | 10.0 | 4300 | 0.1631 | 0.8776 | 0.8898 | 0.8836 | 0.9765 | | 0.0015 | 11.0 | 4730 | 0.1649 | 0.8782 | 0.8827 | 0.8804 | 0.9760 | | 0.001 | 12.0 | 5160 | 0.1646 | 0.8787 | 0.8829 | 0.8808 | 0.9761 | | 0.0008 | 13.0 | 5590 | 0.1686 | 0.8811 | 0.8846 | 0.8829 | 0.9765 | | 0.0006 | 14.0 | 6020 | 0.1714 | 0.8820 | 0.8831 | 0.8825 | 0.9765 | | 0.0006 | 15.0 | 6450 | 0.1706 | 0.8814 | 0.8838 | 0.8826 | 0.9764 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2 ### Citation If you used the datasets and models in this repository, please cite it. ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.09611, doi = {10.48550/ARXIV.2302.09611}, url = {https://arxiv.org/abs/2302.09611}, author = {Sartipi, Amir and Fatemi, Afsaneh}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
Bagus/wav2vec2-xlsr-japanese-speech-emotion-recognition
[ "pytorch", "wav2vec2", "audio-classification", "ja", "dataset:jtes", "transformers", "audio", "speech", "speech-emotion-recognition", "has_space" ]
audio-classification
{ "architectures": [ "HubertForSequenceClassification" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: mit tags: - generated_from_trainer datasets: Amir13/wnut2017-persian metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-base-wnut2017 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-wnut2017 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [wnut2017-persian](https://huggingface.co/datasets/Amir13/wnut2017-persian) dataset. It achieves the following results on the evaluation set: - Loss: 0.2943 - Precision: 0.5430 - Recall: 0.4181 - F1: 0.4724 - Accuracy: 0.9379 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 106 | 0.3715 | 0.0667 | 0.0012 | 0.0024 | 0.9119 | | No log | 2.0 | 212 | 0.3279 | 0.3482 | 0.1783 | 0.2359 | 0.9217 | | No log | 3.0 | 318 | 0.3008 | 0.5574 | 0.3627 | 0.4394 | 0.9344 | | No log | 4.0 | 424 | 0.2884 | 0.5226 | 0.3614 | 0.4274 | 0.9363 | | 0.2149 | 5.0 | 530 | 0.2943 | 0.5430 | 0.4181 | 0.4724 | 0.9379 | | 0.2149 | 6.0 | 636 | 0.3180 | 0.5338 | 0.3711 | 0.4378 | 0.9377 | | 0.2149 | 7.0 | 742 | 0.3090 | 0.4993 | 0.4277 | 0.4607 | 0.9365 | | 0.2149 | 8.0 | 848 | 0.3300 | 0.5300 | 0.4048 | 0.4590 | 0.9380 | | 0.2149 | 9.0 | 954 | 0.3365 | 0.4938 | 0.3843 | 0.4322 | 0.9367 | | 0.0623 | 10.0 | 1060 | 0.3363 | 0.5028 | 0.4313 | 0.4643 | 0.9363 | | 0.0623 | 11.0 | 1166 | 0.3567 | 0.4992 | 0.3880 | 0.4366 | 0.9356 | | 0.0623 | 12.0 | 1272 | 0.3681 | 0.5164 | 0.3988 | 0.4500 | 0.9375 | | 0.0623 | 13.0 | 1378 | 0.3698 | 0.5086 | 0.3928 | 0.4432 | 0.9376 | | 0.0623 | 14.0 | 1484 | 0.3690 | 0.5157 | 0.4157 | 0.4603 | 0.9380 | | 0.0303 | 15.0 | 1590 | 0.3744 | 0.5045 | 0.4072 | 0.4507 | 0.9375 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2 ### Citation If you used the datasets and models in this repository, please cite it. ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.09611, doi = {10.48550/ARXIV.2302.09611}, url = {https://arxiv.org/abs/2302.09611}, author = {Sartipi, Amir and Fatemi, Afsaneh}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
Bala/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-14T13:26:29Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
Banshee/dialoGPT-luke-small
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 58.50 +/- 53.06 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Barkavi/totto-t5-base-bert-score-121K
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
51
null
--- language: - en - sp - ja - pe - hi - fr - ch - be - gu - ge - te - it - ar - po - ta - ma - ma - or - pa - po - ur - ga - he - ko - ca - th - du - in - vi - bu - fi - ce - la - tu - ru - cr - sw - yo - ku - bu - ma - cz - fi - so - ta - sw - si - ka - zh - ig - xh - ro - ha - es - sl - li - gr - ne - as - no widget: - text: "Translate to German: My name is Arthur" example_title: "Translation" - text: "Please answer to the following question. Who is going to be the next Ballon d'or?" example_title: "Question Answering" - text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering." example_title: "Logical reasoning" - text: "Please answer the following question. What is the boiling point of Nitrogen?" example_title: "Scientific knowledge" - text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?" example_title: "Yes/no question" - text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?" example_title: "Reasoning task" - text: "Q: ( False or not False or False ) is? A: Let's think step by step" example_title: "Boolean Expressions" - text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?" example_title: "Math reasoning" - text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?" example_title: "Premise and hypothesis" tags: - text2text-generation datasets: - svakulenk0/qrecc - taskmaster2 - djaym7/wiki_dialog - deepmind/code_contests - lambada - gsm8k - aqua_rat - esnli - quasc - qed - financial_phrasebank license: apache-2.0 --- # Model Card for LoRA-FLAN-T5 large ![model image](https://s3.amazonaws.com/moonup/production/uploads/1666363435475-62441d1d9fdefb55a0b7d12c.png) This repository contains the LoRA (Low Rank Adapters) of `flan-t5-large` that has been fine-tuned on [`financial_phrasebank`](https://huggingface.co/datasets/financial_phrasebank) dataset. ## Usage Use this adapter with `peft` library ```python # pip install peft transformers import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForSeq2SeqLM, AutoTokenizer peft_model_id = "ybelkada/flan-t5-large-financial-phrasebank-lora" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForSeq2SeqLM.from_pretrained( config.base_model_name_or_path, torch_dtype='auto', device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Load the Lora model model = PeftModel.from_pretrained(model, peft_model_id) ``` Enjoy!
Barytes/hellohf
[ "tf", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: model2-thesis-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model2-thesis-2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2411 - Accuracy: 0.928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 145 | 0.2504 | 0.916 | | No log | 2.0 | 290 | 0.2250 | 0.926 | | No log | 3.0 | 435 | 0.2411 | 0.928 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Batsy24/DialoGPT-small-Twilight_EdBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Room-portraits Dreambooth model trained by rpip with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)! To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars). Sample pictures of this concept:
BatuhanYilmaz/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - ncbi_disease metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-base-ncbi_disease-en results: - task: name: Token Classification type: token-classification dataset: name: ncbi_disease type: ncbi_disease config: ncbi_disease split: validation args: ncbi_disease metrics: - name: Precision type: precision value: 0.8562421185372006 - name: Recall type: recall value: 0.8627700127064803 - name: F1 type: f1 value: 0.859493670886076 - name: Accuracy type: accuracy value: 0.9868991989319092 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-ncbi_disease-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [ncbi_disease](https://huggingface.co/datasets/ncbi_disease) dataset. It achieves the following results on the evaluation set: - Loss: 0.0496 - Precision: 0.8562 - Recall: 0.8628 - F1: 0.8595 - Accuracy: 0.9869 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 170 | 0.0555 | 0.7949 | 0.7980 | 0.7964 | 0.9833 | | No log | 2.0 | 340 | 0.0524 | 0.7404 | 0.8551 | 0.7936 | 0.9836 | | 0.0803 | 3.0 | 510 | 0.0484 | 0.7932 | 0.8869 | 0.8374 | 0.9849 | | 0.0803 | 4.0 | 680 | 0.0496 | 0.8562 | 0.8628 | 0.8595 | 0.9869 | | 0.0803 | 5.0 | 850 | 0.0562 | 0.7976 | 0.8615 | 0.8283 | 0.9848 | | 0.0152 | 6.0 | 1020 | 0.0606 | 0.8086 | 0.8856 | 0.8454 | 0.9846 | | 0.0152 | 7.0 | 1190 | 0.0709 | 0.8412 | 0.8412 | 0.8412 | 0.9866 | | 0.0152 | 8.0 | 1360 | 0.0735 | 0.8257 | 0.8666 | 0.8456 | 0.9843 | | 0.0059 | 9.0 | 1530 | 0.0730 | 0.8343 | 0.8767 | 0.8550 | 0.9866 | | 0.0059 | 10.0 | 1700 | 0.0855 | 0.8130 | 0.8895 | 0.8495 | 0.9843 | | 0.0059 | 11.0 | 1870 | 0.0868 | 0.8263 | 0.8767 | 0.8508 | 0.9860 | | 0.0026 | 12.0 | 2040 | 0.0862 | 0.8273 | 0.8767 | 0.8513 | 0.9858 | | 0.0026 | 13.0 | 2210 | 0.0875 | 0.8329 | 0.8806 | 0.8561 | 0.9859 | | 0.0026 | 14.0 | 2380 | 0.0889 | 0.8287 | 0.8793 | 0.8533 | 0.9859 | | 0.0013 | 15.0 | 2550 | 0.0884 | 0.8321 | 0.8755 | 0.8533 | 0.9861 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2 ### Citation If you used the datasets and models in this repository, please cite it. 
```bibtex @misc{https://doi.org/10.48550/arxiv.2302.09611, doi = {10.48550/ARXIV.2302.09611}, url = {https://arxiv.org/abs/2302.09611}, author = {Sartipi, Amir and Fatemi, Afsaneh}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
BatuhanYilmaz/bert-finetuned-nerxD
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: bdbybt --- ### bdbybt_puig Dreambooth model trained by jaimexv with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: bdbybt (use that on your prompt) ![bdbybt 0](https://huggingface.co/jaimexv/bdbybt-puig/resolve/main/concept_images/bdbybt_%281%29.jpg)![bdbybt 1](https://huggingface.co/jaimexv/bdbybt-puig/resolve/main/concept_images/bdbybt_%282%29.jpg)![bdbybt 2](https://huggingface.co/jaimexv/bdbybt-puig/resolve/main/concept_images/bdbybt_%283%29.jpg)![bdbybt 3](https://huggingface.co/jaimexv/bdbybt-puig/resolve/main/concept_images/bdbybt_%284%29.jpg)![bdbybt 4](https://huggingface.co/jaimexv/bdbybt-puig/resolve/main/concept_images/bdbybt_%285%29.jpg)![bdbybt 5](https://huggingface.co/jaimexv/bdbybt-puig/resolve/main/concept_images/bdbybt_%286%29.jpg)![bdbybt 6](https://huggingface.co/jaimexv/bdbybt-puig/resolve/main/concept_images/bdbybt_%287%29.jpg)![bdbybt 7](https://huggingface.co/jaimexv/bdbybt-puig/resolve/main/concept_images/bdbybt_%288%29.jpg)
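A minimal `diffusers` sketch for running this concept (assuming the default scheduler and a CUDA device; the repo id is taken from the image links above):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth weights from this repo (repo id taken from the image links above)
pipe = StableDiffusionPipeline.from_pretrained("jaimexv/bdbybt-puig", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Remember to include the concept token "bdbybt" in the prompt
image = pipe("a photo of bdbybt").images[0]
image.save("bdbybt.png")
```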
BatuhanYilmaz/code-search-net-tokenizer1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: frangiral/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:squad", "arxiv:1910.01108", "transformers", "question-answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
null
--- license: mit tags: - generated_from_trainer datasets: Amir13/conll2003-persian metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-base-conll2003 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-conll2003 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [conll2003-persian](https://huggingface.co/datasets/Amir13/conll2003-persian ) dataset. It achieves the following results on the evaluation set: - Loss: 0.1579 - Precision: 0.8794 - Recall: 0.8745 - F1: 0.8769 - Accuracy: 0.9758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 430 | 0.1374 | 0.8043 | 0.7966 | 0.8004 | 0.9613 | | 0.2862 | 2.0 | 860 | 0.1093 | 0.8384 | 0.8482 | 0.8433 | 0.9695 | | 0.1043 | 3.0 | 1290 | 0.1121 | 0.8448 | 0.8556 | 0.8502 | 0.9708 | | 0.0689 | 4.0 | 1720 | 0.1094 | 0.8635 | 0.8650 | 0.8643 | 0.9737 | | 0.0473 | 5.0 | 2150 | 0.1225 | 0.8665 | 0.8625 | 0.8645 | 0.9736 | | 0.0342 | 6.0 | 2580 | 0.1186 | 0.8722 | 0.8730 | 0.8726 | 0.9745 | | 0.0245 | 7.0 | 3010 | 0.1292 | 0.8802 | 0.8717 | 0.8759 | 0.9755 | | 0.0245 | 8.0 | 3440 | 0.1309 | 0.8832 | 0.8689 | 0.8760 | 0.9749 | | 0.0177 | 9.0 | 3870 | 0.1388 | 0.8712 | 0.8717 | 0.8715 | 0.9743 | | 0.0135 | 10.0 | 4300 | 0.1466 | 0.8699 | 0.8728 | 0.8714 | 0.9752 | | 0.0103 | 11.0 | 4730 | 0.1486 | 0.8716 | 0.8747 | 0.8731 | 0.9756 | | 0.0081 | 12.0 | 5160 | 0.1521 | 0.8789 | 0.8736 | 0.8762 | 0.9759 | | 0.007 | 13.0 | 5590 | 0.1546 | 0.8804 | 0.8734 | 0.8769 | 0.9756 | | 0.0053 | 14.0 | 6020 | 0.1552 | 0.8750 | 0.8732 | 0.8741 | 0.9756 | | 0.0053 | 15.0 | 6450 | 0.1579 | 0.8794 | 0.8745 | 0.8769 | 0.9758 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2 ### Citation If you used the datasets and models in this repository, please cite it. ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.09611, doi = {10.48550/ARXIV.2302.09611}, url = {https://arxiv.org/abs/2302.09611}, author = {Sartipi, Amir and Fatemi, Afsaneh}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
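A possible inference sketch with the 🤗 `pipeline` API (the repo id below is a placeholder — the card does not state where the checkpoint is published):

```python
from transformers import pipeline

# "<user>/xlm-roberta-base-conll2003" is a hypothetical repo id; replace it with the actual checkpoint
ner = pipeline(
    "token-classification",
    model="<user>/xlm-roberta-base-conll2003",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("سازمان ملل در نیویورک مستقر است."))  # Persian input, since the model was fine-tuned on conll2003-persian
```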
BatuhanYilmaz/dummy
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - wnut_17 metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-base-wnut2017-en results: - task: name: Token Classification type: token-classification dataset: name: wnut_17 type: wnut_17 config: wnut_17 split: validation args: wnut_17 metrics: - name: Precision type: precision value: 0.7219662058371735 - name: Recall type: recall value: 0.562200956937799 - name: F1 type: f1 value: 0.6321452589105581 - name: Accuracy type: accuracy value: 0.9589398080467807 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-wnut2017-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on [wnut_17](https://huggingface.co/datasets/wnut_17) dataset. It achieves the following results on the evaluation set: - Loss: 0.2319 - Precision: 0.7220 - Recall: 0.5622 - F1: 0.6321 - Accuracy: 0.9589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 107 | 0.2789 | 0.4679 | 0.3397 | 0.3936 | 0.9408 | | No log | 2.0 | 214 | 0.2092 | 0.6875 | 0.5 | 0.5789 | 0.9518 | | No log | 3.0 | 321 | 0.1968 | 0.6194 | 0.5431 | 0.5787 | 0.9567 | | No log | 4.0 | 428 | 0.2172 | 0.7212 | 0.5383 | 0.6164 | 0.9586 | | 0.1523 | 5.0 | 535 | 0.2319 | 0.7220 | 0.5622 | 0.6321 | 0.9589 | | 0.1523 | 6.0 | 642 | 0.2023 | 0.6180 | 0.5514 | 0.5828 | 0.9577 | | 0.1523 | 7.0 | 749 | 0.2494 | 0.6895 | 0.5419 | 0.6068 | 0.9589 | | 0.1523 | 8.0 | 856 | 0.2844 | 0.7018 | 0.5263 | 0.6015 | 0.9578 | | 0.1523 | 9.0 | 963 | 0.2568 | 0.6808 | 0.5562 | 0.6122 | 0.9592 | | 0.0294 | 10.0 | 1070 | 0.2453 | 0.6718 | 0.5754 | 0.6198 | 0.9596 | | 0.0294 | 11.0 | 1177 | 0.2538 | 0.6933 | 0.5706 | 0.6260 | 0.9600 | | 0.0294 | 12.0 | 1284 | 0.2638 | 0.6865 | 0.5658 | 0.6203 | 0.9593 | | 0.0294 | 13.0 | 1391 | 0.2744 | 0.6764 | 0.5526 | 0.6083 | 0.9587 | | 0.0294 | 14.0 | 1498 | 0.2714 | 0.6812 | 0.5622 | 0.6160 | 0.9590 | | 0.0135 | 15.0 | 1605 | 0.2724 | 0.6830 | 0.5670 | 0.6196 | 0.9593 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2 ### Citation If you used the datasets and models in this repository, please cite it. ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.09611, doi = {10.48550/ARXIV.2302.09611}, url = {https://arxiv.org/abs/2302.09611}, author = {Sartipi, Amir and Fatemi, Afsaneh}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
BatuhanYilmaz/mlm-finetuned-imdb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: Amir13/ontonotes5-persian metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-base-ontonotesv5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-ontonotesv5 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [ontonotes5-persian](https://huggingface.co/datasets/Amir13/ontonotes5-persian) dataset. It achieves the following results on the evaluation set: - Loss: 0.1693 - Precision: 0.8336 - Recall: 0.8360 - F1: 0.8348 - Accuracy: 0.9753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1145 | 1.0 | 2310 | 0.1174 | 0.7717 | 0.7950 | 0.7832 | 0.9697 | | 0.0793 | 2.0 | 4620 | 0.1084 | 0.8129 | 0.8108 | 0.8118 | 0.9729 | | 0.0627 | 3.0 | 6930 | 0.1078 | 0.8227 | 0.8102 | 0.8164 | 0.9735 | | 0.047 | 4.0 | 9240 | 0.1132 | 0.8105 | 0.8223 | 0.8164 | 0.9731 | | 0.0347 | 5.0 | 11550 | 0.1190 | 0.8185 | 0.8315 | 0.8250 | 0.9742 | | 0.0274 | 6.0 | 13860 | 0.1282 | 0.8088 | 0.8387 | 0.8235 | 0.9734 | | 0.0202 | 7.0 | 16170 | 0.1329 | 0.8219 | 0.8354 | 0.8286 | 0.9745 | | 0.0167 | 8.0 | 18480 | 0.1423 | 0.8147 | 0.8376 | 0.8260 | 0.9742 | | 0.0134 | 9.0 | 20790 | 0.1520 | 0.8259 | 0.8308 | 0.8284 | 0.9745 | | 0.0097 | 10.0 | 23100 | 0.1627 | 0.8226 | 0.8377 | 0.8300 | 0.9745 | | 0.0084 | 11.0 | 25410 | 0.1693 | 0.8336 | 0.8360 | 0.8348 | 0.9753 | | 0.0066 | 12.0 | 27720 | 0.1744 | 0.8317 | 0.8359 | 0.8338 | 0.9751 | | 0.0053 | 13.0 | 30030 | 0.1764 | 0.8247 | 0.8409 | 0.8327 | 0.9750 | | 0.004 | 14.0 | 32340 | 0.1797 | 0.8280 | 0.8378 | 0.8328 | 0.9751 | | 0.004 | 15.0 | 34650 | 0.1809 | 0.8310 | 0.8382 | 0.8346 | 0.9754 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2 ## Citation If you used the datasets and models in this repository, please cite it. ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.09611, doi = {10.48550/ARXIV.2302.09611}, url = {https://arxiv.org/abs/2302.09611}, author = {Sartipi, Amir and Fatemi, Afsaneh}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
Baybars/wav2vec2-xls-r-300m-cv8-turkish
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "tr", "dataset:common_voice", "transformers", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit tags: - generated_from_trainer datasets: - imdb model-index: - name: gpt2-xl-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-xl-imdb This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
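If the checkpoint is pushed to the Hub, generation could look like the following sketch ("<user>/gpt2-xl-imdb" is a hypothetical repo id):

```python
from transformers import pipeline

# Hypothetical repo id -- replace with the actual location of this checkpoint
generator = pipeline("text-generation", model="<user>/gpt2-xl-imdb")
print(generator("This movie was", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```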
BeIR/query-gen-msmarco-t5-base-v1
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
1,816
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: frangiral/ppo-Pyramids1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BeIR/query-gen-msmarco-t5-large-v1
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
1,225
2023-02-14T14:29:47Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: tannonk/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Beatriz/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.43 +/- 0.65 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
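A minimal loading sketch with `huggingface_sb3` (the repo id and filename below are placeholders — the card does not state them; if the repo also ships VecNormalize statistics, those would need to be loaded as well):

```python
import gym
import panda_gym  # registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo id and filename -- replace with the actual ones
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```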
Bee-Garbs/DialoGPT-cartman-small
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1159.33 +/- 399.23 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
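One possible way to load and run the agent (a sketch; the repo id and filename are placeholders, not stated in the card):

```python
import gym
import pybullet_envs  # registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo id and filename -- replace with the actual ones
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)
obs, reward, done, info = env.step(action)
```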
Begimay/Task
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - samsum language: - en --- This model is based on the pre-trained facebook/bart-large-xsum model and was fine-tuned on the SAMSum dataset. A write-up of the fine-tuning process is available at https://medium.com/@ferlatti.aldo/fine-tuning-a-chat-summarizer-c18625bc817d
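A possible inference sketch (the repo id is a placeholder for wherever this fine-tuned checkpoint is published):

```python
from transformers import pipeline

# "<this-repo-id>" is a placeholder -- point it at the fine-tuned BART checkpoint
summarizer = pipeline("summarization", model="<this-repo-id>")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue)[0]["summary_text"])
```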
Bella4322/Sarah
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: openrail++ tags: - coreml - stable-diffusion - text-to-image --- # Core ML Converted Model This model was converted to Core ML for use on Apple devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). The model is provided for use on systems using the [Apple CoreML StableDiffusion](https://github.com/apple/ml-stable-diffusion) library/code. This model is the `split_einsum` version and should be compatible with all compute unit options including Neural Engine. It also supports the new image-2-image functionality and has the necessary Encoder bundled in and has been tested to work when providing an input image. # Stable Diffusion v2-1-base Model Card This model card focuses on the model associated with the Stable Diffusion v2-1-base model. This `stable-diffusion-2-1-base` model fine-tunes [stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) (`512-base-ema.ckpt`) with 220k extra steps taken, with `punsafe=0.98` on the same dataset. - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `v2-1_512-ema-pruned.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.ckpt). - Use it with 🧨 [`diffusers`](#examples) ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner. 
```bash pip install diffusers transformers accelerate scipy safetensors ``` Running the pipeline (if you don't swap the scheduler it will run with the default PNDM/PLMS scheduler, in this example we are swapping it to EulerDiscreteScheduler): ```python from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler import torch model_id = "stabilityai/stable-diffusion-2-1-base" scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` **Notes**: - Despite not being a dependency, we highly recommend you to install [xformers](https://github.com/facebookresearch/xformers) for memory efficient attention (better performance) - If you have low GPU RAM available, make sure to add a `pipe.enable_attention_slicing()` after sending it to `cuda` for less VRAM usage (to the cost of speed) # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. 
- The autoencoding part of the model is lossy - The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section). ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic. **Training Procedure** Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through the OpenCLIP-ViT/H text-encoder. - The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512. We currently provide the following checkpoints, for various versions: ### Version 2.1 - `512-base-ema.ckpt`: Fine-tuned on `512-base-ema.ckpt` 2.0 with 220k extra steps taken, with `punsafe=0.98` on the same dataset. - `768-v-ema.ckpt`: Resumed from `768-v-ema.ckpt` 2.0 with an additional 55k steps on the same dataset (`punsafe=0.1`), and then fine-tuned for another 155k extra steps with `punsafe=0.98`. ### Version 2.0 - `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`. - `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset. - `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. 
Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. - `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama). - `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/model-variants.jpg) Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 200000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq. ## Citation @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
BenDavis71/GPT-2-Finetuning-AIRaid
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
The dataset was obtained from https://www.mvtec.com/company/research/datasets/mvtec-ad and the code uses the bottle category. 1. train.py is the code that creates the model. 2. generate_image.py is the code that generates a normal image from an abnormal image. 3. predict.py is the code that generates the heatmap image of the anomalous area. The bottle dataset should be placed at the same level as the code. ## sample image ![pic1.jpg](pic1.jpg)
Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.15 +/- 0.31 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Bharathdamu/wav2vec2-model-hindibhasha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.32 +/- 1.24 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Biasface/DDDC
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- tags: - autotrain - vision - image-classification datasets: - 1024khandsom/autotrain-data-ant-bee widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 0.7388274047348641 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 3482194557 - CO2 Emissions (in grams): 0.7388 ## Validation Metrics - Loss: 0.013 - Accuracy: 1.000 - Precision: 1.000 - Recall: 1.000 - AUC: 1.000 - F1: 1.000
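A possible inference sketch (the repo id below is a guess at the AutoTrain naming scheme and should be treated as hypothetical):

```python
from transformers import pipeline

# Hypothetical repo id built from the project name and model ID above -- replace with the actual one
classifier = pipeline("image-classification", model="<user>/autotrain-ant-bee-3482194557")

# Example image URL taken from the widget section of this card
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```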
BigSalmon/MrLincoln13
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # ./output/ This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("./output/") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BigTooth/DialoGPT-Megumin
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: Amiko/PyramidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Binbin/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - Abirate/english_quotes language: - en pipeline_tag: text-generation --- A simple adapter trained on English quotes, using the brand-new PEFT library.
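A loading sketch with the PEFT API (both the base model and the adapter repo id are assumptions — the card states neither):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloom-560m"  # assumption: the card does not say which base model the adapter was trained on
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# "<this-repo-id>" is a placeholder for this adapter repository
model = PeftModel.from_pretrained(base, "<this-repo-id>")

inputs = tokenizer("Two things are infinite: ", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```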
Blackmist786/DialoGPt-small-transformers4
[ "pytorch" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: swin-tiny-patch4-window7-224-finetuned-algae-wirs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-algae-wirs This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.9582 - eval_accuracy: 0.6227 - eval_runtime: 10.6179 - eval_samples_per_second: 160.483 - eval_steps_per_second: 5.086 - epoch: 27.8 - step: 3336 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
BobBraico/distilbert-base-uncased-finetuned-imdb-accelerate
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- # [Visual Product Recognition Challenge](https://www.aicrowd.com/challenges/visual-product-recognition-challenge-2023) The trained models for the competition. The training code for the models can be found in [HCA97/Product-Recognition](https://github.com/HCA97/Product-Recognition). # How to use it? You need to install the `open_clip` library (published on PyPI as `open_clip_torch`). ```bash pip install open_clip_torch ``` Example of loading the model: ```py import torch as th import open_clip  model = open_clip.create_model_and_transforms('ViT-H-14', None)[0].visual model.load_state_dict(th.load('path to model')) model.half() model.eval() ```
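As a follow-up, a sketch of embedding a single product image with the loaded visual tower (assumes a CUDA device; `'query.jpg'` is a placeholder filename):

```python
from PIL import Image
import torch as th
import open_clip

# Build the backbone and the matching preprocessing transform, then load the fine-tuned weights
clip_model, _, preprocess = open_clip.create_model_and_transforms('ViT-H-14', None)
visual = clip_model.visual
visual.load_state_dict(th.load('path to model'))  # path to the checkpoint from this repo
visual = visual.half().cuda().eval()

img = preprocess(Image.open('query.jpg')).unsqueeze(0).half().cuda()  # 'query.jpg' is a placeholder
with th.no_grad():
    emb = visual(img)                          # one embedding vector per image
emb = emb / emb.norm(dim=-1, keepdim=True)     # normalise so cosine similarity is a dot product
```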
BogdanKuloren/continual-learning-paper-embeddings-model
[ "pytorch", "mpnet", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "MPNetModel" ], "model_type": "mpnet", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
BonjinKim/dst_kor_bert
[ "pytorch", "jax", "bert", "pretraining", "transformers" ]
null
{ "architectures": [ "BertForPreTraining" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="enlacinglines/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Boondong/Wandee
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="tvarella/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BossLee/t5-gec
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxitaxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="tvarella/taxitaxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
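As a quick sanity check, a hedged single-episode greedy rollout sketch (same assumptions as the FrozenLake example above about the dict returned by the course's `load_from_hub` helper and the classic `gym` step signature):

```python
import gym
import numpy as np

env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])                  # assumed key name from the course helper

state, done, total = env.reset(), False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))          # greedy action
    state, reward, done, info = env.step(action)
    total += reward

print(f"Episode return: {total}")                   # a well-trained Taxi agent averages around +7.56
```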
Botjallu/DialoGPT-small-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-50kepisodes results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="enlacinglines/Taxi-v3-50kepisodes", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BotterHax/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### SupeGEN Dreambooth model trained by jetpackjules with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
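Once the weights are on the Hub, the concept can also be sampled with `diffusers`. This is only a sketch: the repo id and the `supegen` instance prompt below are placeholders, not confirmed by the card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id -- replace with this model's actual Hub id
pipe = StableDiffusionPipeline.from_pretrained("jetpackjules/SupeGEN", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "supegen" is assumed to be the instance token used during DreamBooth training
image = pipe("a photo of supegen", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("supegen_sample.png")
```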
Branex/gpt-neo-2.7B
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="KubiakJakub01/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Brayan/CNN_Brain_Tumor
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
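Since no usage snippet is included, here is a hedged inference sketch with the `question-answering` pipeline; the repo id is a placeholder for wherever this fine-tuned checkpoint was pushed:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="your-username/bert-finetuned-squad")  # placeholder repo id

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```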
BrianTin/MTBERT
[ "pytorch", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Nonin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Brinah/1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: diffusers base_model: CompVis/stable-diffusion-v1-4 pipeline_tag: text-to-image --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [Mobius Labs] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
Broadus20/DialoGPT-small-joshua
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi3_V1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Nonin/Taxi3_V1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Brokette/projetCS
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: openai/whisper-medium results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_myst type: rishabhjain16/infer_myst config: en split: test metrics: - type: wer value: 12.14 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pfs type: rishabhjain16/infer_pfs config: en split: test metrics: - type: wer value: 41.83 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_cmu type: rishabhjain16/infer_cmu config: en split: test metrics: - type: wer value: 4.46 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_italian type: rishabhjain16/infer_pf_italian config: en split: test metrics: - type: wer value: 125.05 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_german type: rishabhjain16/infer_pf_german config: en split: test metrics: - type: wer value: 113.07 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_swedish type: rishabhjain16/infer_pf_swedish config: en split: test metrics: - type: wer value: 158.75 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_so_chinese type: rishabhjain16/infer_so_chinese config: en split: test metrics: - type: wer value: 33.24 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/libritts_dev_clean type: rishabhjain16/libritts_dev_clean config: en split: test metrics: - type: wer value: 6.1 name: WER --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3246 - Wer: 341.9230 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2032 | 0.12 | 500 | 0.2243 | 493.2750 | | 0.1192 | 1.1 | 1000 | 0.2127 | 424.6297 | | 0.1109 | 2.08 | 1500 | 0.2237 | 351.5590 | | 0.042 | 3.06 | 2000 | 0.2460 | 165.9201 | | 0.0262 | 4.04 | 2500 | 0.2909 | 231.2864 | | 0.0139 | 5.02 | 3000 | 0.3042 | 350.0223 | | 0.0084 | 6.0 | 3500 | 0.3247 | 327.0151 | | 0.0023 | 6.13 | 4000 | 0.3246 | 341.9230 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
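A hedged transcription sketch for the resulting checkpoint; the model path and audio file below are placeholders, so point them at the actual fine-tuned weights and a real recording:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="path/to/finetuned-whisper-medium",   # placeholder -- the checkpoint produced by this run
    chunk_length_s=30,                          # long-form audio is split into 30 s windows
)
print(asr("sample_child_speech.wav")["text"])   # placeholder audio file
```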
BrunoNogueira/DialoGPT-kungfupanda
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - sroie metrics: - precision - recall - f1 - accuracy model-index: - name: sogemi_ddt_1.0 results: - task: name: Token Classification type: token-classification dataset: name: sroie type: sroie config: discharge split: test args: discharge metrics: - name: Precision type: precision value: 0.9442896935933147 - name: Recall type: recall value: 0.9713467048710601 - name: F1 type: f1 value: 0.9576271186440677 - name: Accuracy type: accuracy value: 0.9926639156350298 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sogemi_ddt_1.0 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset. It achieves the following results on the evaluation set: - Loss: 0.0375 - Precision: 0.9443 - Recall: 0.9713 - F1: 0.9576 - Accuracy: 0.9927 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.35 | 100 | 0.6022 | 0.2061 | 0.0774 | 0.1125 | 0.8634 | | No log | 2.7 | 200 | 0.3445 | 0.4627 | 0.3381 | 0.3907 | 0.9051 | | No log | 4.05 | 300 | 0.1820 | 0.7524 | 0.6619 | 0.7043 | 0.9583 | | No log | 5.41 | 400 | 0.1141 | 0.8742 | 0.8166 | 0.8444 | 0.9812 | | 0.3555 | 6.76 | 500 | 0.0719 | 0.9229 | 0.9255 | 0.9242 | 0.9867 | | 0.3555 | 8.11 | 600 | 0.0526 | 0.9202 | 0.9255 | 0.9229 | 0.9881 | | 0.3555 | 9.46 | 700 | 0.0531 | 0.9197 | 0.9513 | 0.9352 | 0.9862 | | 0.3555 | 10.81 | 800 | 0.0454 | 0.9167 | 0.9140 | 0.9154 | 0.9872 | | 0.3555 | 12.16 | 900 | 0.0447 | 0.9284 | 0.9284 | 0.9284 | 0.9895 | | 0.0479 | 13.51 | 1000 | 0.0436 | 0.9370 | 0.9370 | 0.9370 | 0.9872 | | 0.0479 | 14.86 | 1100 | 0.0383 | 0.9385 | 0.9628 | 0.9505 | 0.9913 | | 0.0479 | 16.22 | 1200 | 0.0389 | 0.9468 | 0.9685 | 0.9575 | 0.9908 | | 0.0479 | 17.57 | 1300 | 0.0349 | 0.9743 | 0.9771 | 0.9757 | 0.9945 | | 0.0479 | 18.92 | 1400 | 0.0329 | 0.9885 | 0.9857 | 0.9871 | 0.9954 | | 0.0244 | 20.27 | 1500 | 0.0380 | 0.9412 | 0.9628 | 0.9518 | 0.9917 | | 0.0244 | 21.62 | 1600 | 0.0447 | 0.8917 | 0.9198 | 0.9055 | 0.9853 | | 0.0244 | 22.97 | 1700 | 0.0434 | 0.9148 | 0.9542 | 0.9341 | 0.9876 | | 0.0244 | 24.32 | 1800 | 0.0444 | 0.9280 | 0.9599 | 0.9437 | 0.9890 | | 0.0244 | 25.68 | 1900 | 0.0386 | 0.9361 | 0.9656 | 0.9506 | 0.9913 | | 0.015 | 27.03 | 2000 | 0.0381 | 0.9415 | 0.9685 | 0.9548 | 0.9917 | | 0.015 | 28.38 | 2100 | 0.0341 | 0.9577 | 0.9742 | 0.9659 | 0.9936 | | 0.015 | 29.73 | 2200 | 0.0340 | 0.9715 | 0.9771 | 0.9743 | 0.9945 | | 0.015 | 31.08 | 2300 | 0.0365 | 0.9493 | 0.9656 | 0.9574 | 0.9931 | | 0.015 | 32.43 | 2400 | 0.0398 | 0.9339 | 0.9713 | 0.9522 | 0.9913 | | 0.0123 | 33.78 | 2500 | 0.0375 | 0.9443 | 0.9713 | 0.9576 | 0.9927 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.2.2 - Tokenizers 0.13.2
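For reference, a hedged inference sketch for a LayoutLMv3 token-classification checkpoint like this one. The checkpoint path, image, words and boxes are all placeholders, and `apply_ocr=False` assumes you supply your own OCR words with boxes normalised to the 0–1000 range:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForTokenClassification.from_pretrained("path/to/sogemi_ddt_1.0")   # placeholder

image = Image.open("ddt_page.png").convert("RGB")          # placeholder document image
words = ["MITTENTE", "SOGEMI"]                             # placeholder OCR words
boxes = [[80, 40, 220, 60], [230, 40, 340, 60]]            # placeholder boxes, 0-1000 normalised

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in predictions]   # one label per token (incl. special tokens)
```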
Bryanwong/wangchanberta-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_polarity metrics: - accuracy - f1 model-index: - name: design-amazon results: - task: name: Text Classification type: text-classification dataset: name: amazon_polarity type: amazon_polarity config: amazon_polarity split: test args: amazon_polarity metrics: - name: Accuracy type: accuracy value: 0.9166666666666666 - name: F1 type: f1 value: 0.9180327868852459 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # design-amazon This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset. It achieves the following results on the evaluation set: - Loss: 0.2297 - Accuracy: 0.9167 - F1: 0.9180 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
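A hedged usage sketch with the `text-classification` pipeline; the repo id is a placeholder for the pushed checkpoint:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="your-username/design-amazon")   # placeholder repo id
print(clf("Great product, exactly as described and arrived on time."))
# e.g. [{'label': ..., 'score': ...}] -- label names depend on how the classification head was configured
```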
Brykee/BrykeeBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: GrimReaperSam/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Brykee/DialoGPT-medium-Morty
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - generated_from_trainer metrics: - bleu model-index: - name: t5-small-ENG2BASH-custom-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-ENG2BASH-custom-v2 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2653 - Bleu: 76.4656 - Gen Len: 16.92 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 40 | 0.3151 | 69.2544 | 17.36 | | No log | 2.0 | 80 | 0.3087 | 70.4152 | 17.08 | | No log | 3.0 | 120 | 0.2938 | 75.3344 | 16.92 | | No log | 4.0 | 160 | 0.2671 | 74.809 | 16.92 | | No log | 5.0 | 200 | 0.2653 | 76.4656 | 16.92 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
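A hedged inference sketch with the `text2text-generation` pipeline; the repo id is a placeholder, and the prompt format assumes plain English descriptions go in directly (the card does not document a task prefix):

```python
from transformers import pipeline

nl2bash = pipeline("text2text-generation", model="your-username/t5-small-ENG2BASH-custom-v2")  # placeholder
out = nl2bash("list all files in the current directory sorted by size", max_length=32)
print(out[0]["generated_text"])   # expected to be a bash command, e.g. something like `ls -S`
```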
Bryson575x/riceboi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-base-patch16-224-finetuned-algae-wirs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-algae-wirs This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9663 - Accuracy: 0.6021 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0733 | 1.0 | 120 | 1.0611 | 0.5781 | | 1.0243 | 2.0 | 240 | 1.0628 | 0.5663 | | 0.9852 | 3.0 | 360 | 1.0083 | 0.5845 | | 0.94 | 4.0 | 480 | 1.0005 | 0.5933 | | 0.9744 | 5.0 | 600 | 1.0102 | 0.5786 | | 0.9623 | 6.0 | 720 | 0.9840 | 0.5763 | | 0.9021 | 7.0 | 840 | 0.9869 | 0.5798 | | 0.9181 | 8.0 | 960 | 0.9755 | 0.5827 | | 0.8774 | 9.0 | 1080 | 0.9808 | 0.5798 | | 0.8294 | 10.0 | 1200 | 0.9663 | 0.6021 | | 0.8015 | 11.0 | 1320 | 0.9739 | 0.5980 | | 0.8063 | 12.0 | 1440 | 0.9811 | 0.6009 | | 0.7857 | 13.0 | 1560 | 0.9833 | 0.5933 | | 0.7085 | 14.0 | 1680 | 0.9887 | 0.5998 | | 0.7414 | 15.0 | 1800 | 0.9928 | 0.5974 | | 0.7442 | 16.0 | 1920 | 0.9963 | 0.5992 | | 0.7142 | 17.0 | 2040 | 1.0041 | 0.6004 | | 0.7488 | 18.0 | 2160 | 1.0034 | 0.5962 | | 0.6731 | 19.0 | 2280 | 1.0055 | 0.6021 | | 0.6905 | 20.0 | 2400 | 1.0033 | 0.6009 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
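Finally, a hedged usage sketch for the fine-tuned ViT classifier via the `image-classification` pipeline; the repo id and image path are placeholders:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-username/vit-base-patch16-224-finetuned-algae-wirs",   # placeholder repo id
)
preds = classifier("algae_sample.jpg", top_k=3)   # placeholder image path
for p in preds:
    print(f"{p['label']}: {p['score']:.3f}")
```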