modelId (string, length 4–81) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, length 51–438k)
---|---|---|---|---|---|---
Cameron/BERT-mdgender-wizard
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### arki-20230315-2300-analog-5000-steps on Stable Diffusion via Dreambooth
#### model by NickKolok
This is the Stable Diffusion model fine-tuned on the arki-20230315-2300-analog-5000-steps concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **arki**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
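As a hedged illustration (not part of the original card), inference with `diffusers` might look like the sketch below; the repo id is an assumption based on the model name, so substitute the actual concept repository:
```python
# Minimal sketch; the repo id below is hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/arki-20230315-2300-analog-5000-steps",  # hypothetical repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of arki").images[0]  # prompt uses the instance token `arki`
image.save("arki.png")
```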
|
Carlork314/Carlos
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-15T21:48:58Z |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:CondSCVI
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Trachea
- annotated:True
---
# Description
Tabula Sapiens: a cross-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 5,
"n_layers": 2,
"weight_obs": false,
"dropout_rate": 0.05
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 5112 |
| n_labels | 18 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
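As a sketch (not part of the original card), the listed setup arguments and init params map onto the following scvi-tools calls; the local file name is an assumption, taken from the training data URL below:
```python
# Sketch: re-creating this model's configuration from the properties above.
import anndata
from scvi.model import CondSCVI

# Assumes the training data has been downloaded from the URL in the section below.
adata = anndata.read_h5ad("TS_Trachea_filtered.h5ad")

# model_setup_anndata_args
CondSCVI.setup_anndata(adata, labels_key="cell_ontology_class", layer=None)

# model_init_params
model = CondSCVI(
    adata,
    n_hidden=128,
    n_latent=5,
    n_layers=2,
    weight_obs=False,
    dropout_rate=0.05,
)
```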
# Training data
This is an optional link to where the training data is stored, for cases where it is too large to host on the Hugging Face Model Hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Trachea_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
dccuchile/albert-large-spanish-finetuned-xnli
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-15000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-de | vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-15000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 97,565,955 |
| parameter_size_embedding | 192,001,536 | 11,521,536 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 35.09 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | 15000 | 2 |
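As a usage sketch (not part of the original card), the trimmed checkpoint loads like any other `transformers` text-classification model:
```python
from transformers import pipeline

# The model id comes from this card; the example sentence is arbitrary German input.
classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-15000",
)
print(classifier("Das Konzert gestern Abend war großartig!"))
```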
|
dccuchile/albert-tiny-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32 | null |
---
tags:
- generated_from_keras_callback
model-index:
- name: skillsBERT_v1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# skillsBERT_v1
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0033
- Validation Loss: 0.1292
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
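For reference, a minimal sketch of the same optimizer configuration in TensorFlow/Keras (only the explicitly set fields from the dict above are passed):
```python
import tensorflow as tf

# Values mirror the optimizer dict listed above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```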
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1396 | 0.1406 | 0 |
| 0.1272 | 0.1316 | 1 |
| 0.1121 | 0.1134 | 2 |
| 0.0966 | 0.1054 | 3 |
| 0.0863 | 0.1001 | 4 |
| 0.0785 | 0.0986 | 5 |
| 0.0723 | 0.0982 | 6 |
| 0.0665 | 0.0962 | 7 |
| 0.0611 | 0.0959 | 8 |
| 0.0560 | 0.0946 | 9 |
| 0.0509 | 0.0965 | 10 |
| 0.0459 | 0.0959 | 11 |
| 0.0411 | 0.0986 | 12 |
| 0.0364 | 0.0997 | 13 |
| 0.0320 | 0.1045 | 14 |
| 0.0278 | 0.1061 | 15 |
| 0.0240 | 0.1069 | 16 |
| 0.0204 | 0.1056 | 17 |
| 0.0174 | 0.1094 | 18 |
| 0.0146 | 0.1120 | 19 |
| 0.0122 | 0.1116 | 20 |
| 0.0102 | 0.1195 | 21 |
| 0.0085 | 0.1199 | 22 |
| 0.0071 | 0.1210 | 23 |
| 0.0061 | 0.1206 | 24 |
| 0.0052 | 0.1225 | 25 |
| 0.0046 | 0.1246 | 26 |
| 0.0040 | 0.1266 | 27 |
| 0.0036 | 0.1241 | 28 |
| 0.0033 | 0.1292 | 29 |
### Framework versions
- Transformers 4.28.0.dev0
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
dccuchile/albert-tiny-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-30000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-de | vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-30000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 109,085,955 |
| parameter_size_embedding | 192,001,536 | 23,041,536 |
| vocab_size | 250,002 | 30,002 |
| compression_rate_full | 100.0 | 39.23 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | 30000 | 2 |
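A quick sketch (not part of the original card) to confirm the trimmed vocabulary size reported above:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-30000"
)
print(len(tokenizer))  # expected: roughly 30,002, per the vocab_size row above
```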
|
dccuchile/albert-tiny-spanish-finetuned-qa-mlqa
|
[
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |

---
picture: https://en.wikipedia.org/wiki/Myers%E2%80%93Briggs_Type_Indicator
license: mit
language:
- en
metrics:
- bertscore
pipeline_tag: text-classification
library_name: transformers
---
|
dccuchile/albert-xlarge-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-60000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-de | vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-60000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 132,125,955 |
| parameter_size_embedding | 192,001,536 | 46,081,536 |
| vocab_size | 250,002 | 60,002 |
| compression_rate_full | 100.0 | 47.52 |
| compression_rate_embedding | 100.0 | 24.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | 60000 | 2 |
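A quick sketch (not part of the original card) to verify the full parameter count reported above:
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-60000"
)
print(sum(p.numel() for p in model.parameters()))  # expected: about 132,125,955
```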
|
dccuchile/albert-xlarge-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 24 | null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.07 +/- 2.93
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r phonenix/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
dccuchile/albert-xlarge-spanish-finetuned-pos
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-amaury
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-amaury
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
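As a sketch, these hyperparameters map onto `transformers.TrainingArguments` roughly as follows; the `output_dir` is an assumption, not stated in the card:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-amaury",  # assumption
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=1000,
)
```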
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 235 | 3.8499 |
| No log | 2.0 | 470 | 3.8158 |
| 3.475 | 3.0 | 705 | 3.8082 |
| 3.475 | 4.0 | 940 | 3.8136 |
| 3.1575 | 4.26 | 1000 | 3.8137 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
dccuchile/albert-xxlarge-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | 2023-03-15T22:35:32Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 9.40 +/- 0.49
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dccuchile/albert-large-spanish
|
[
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null |
{
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 75 | null |
---
language:
- en
tags:
- NLG
- pytorch
- transformers
- BART
- Graph-to-Text
- Knowledge Graph
license: apache-2.0
datasets:
- WebNLG
- EventNarrative
---
# Model Description
We release our best-performing models for the WebNLG and EventNarrative datasets from the paper GAP: *A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation*. Our model is intended to be used on knowledge graphs in order to narrate their contents, giving a verbalization of the structured data.
# Paper
Please see our paper [here](https://arxiv.org/abs/2204.06674).
# Citation
If you found this model useful, please consider citing our paper:
```
@inproceedings{colas-etal-2022-gap,
title = "{GAP}: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation",
author = "Colas, Anthony and
Alvandipour, Mehrdad and
Wang, Daisy Zhe",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.506",
pages = "5755--5769"
}
```
# GitHub repo
Please see our GitHub [here](https://github.com/acolas1/GAP_COLING2022).
|
dccuchile/albert-xlarge-spanish
|
[
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null |
{
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 91 | 2023-03-15T22:57:46Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-faustimer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-faustimer
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 361 | 3.9761 |
| 4.0845 | 2.0 | 722 | 3.8678 |
| 3.7953 | 2.77 | 1000 | 3.8445 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
dccuchile/albert-xxlarge-spanish
|
[
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null |
{
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.64 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
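A minimal sketch of what that code might look like; the repo id and filename are assumptions, not taken from this card, and `panda_gym` must be installed to register the environment:
```python
import gym
import panda_gym  # noqa: F401 -- registers PandaReachDense-v2
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="user/a2c-PandaReachDense-v2",  # hypothetical
    filename="a2c-PandaReachDense-v2.zip",  # hypothetical
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```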
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 493.90 +/- 18.30
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pos
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.83 +/- 9.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
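A minimal sketch of what that code might look like; the repo id and filename are assumptions, not taken from this card:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="user/ppo-LunarLander-v2",  # hypothetical
    filename="ppo-LunarLander-v2.zip",  # hypothetical
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```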
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-xnli
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-mldoc
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 39 | null |
---
license: other
---
# llama-30b-int4
**THIS MODEL IS NOW ARCHIVED AND WILL NO LONGER BE UPDATED**
If you still wish to use llama-30b, there are plenty of repos/torrents with the updated weights.
This has been converted to int4 via the GPTQ method. See the repo below for more info.
https://github.com/qwopqwop200/GPTQ-for-LLaMa
# Usage
1. Run manually through GPTQ
2. (More setup but better UI) - Use the [text-generation-webui](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode). Make sure to follow the installation steps first [here](https://github.com/oobabooga/text-generation-webui#installation) before adding GPTQ support.
**Note that a recent code change in GPTQ broke functionality for GPTQ in general, so please follow [these instructions](https://huggingface.co/elinas/alpaca-30b-lora-int4/discussions/2#641a38d5f1ad1c1173d8f192) to fix the issue!**
Since this is instruction-tuned, for best results, use the following format for inference:
```
### Instruction:
your-prompt
### Response:
```
If you want deterministic results, turn off sampling. You can turn it off in the webui by unchecking `do_sample`.
For cai-chat mode, you won't want to use instruction prompting; rather, create a character and set sampler settings. Here is an example of settings that work well for me:
```
do_sample=True
temperature=0.95
top_p=1
typical_p=1
repetition_penalty=1.1
top_k=40
num_beams=1
penalty_alpha=0
min_length=0
length_penalty=1
no_repeat_ngram_size=0
early_stopping=False
```
You can then save this as a `.txt` file in the `presets` folder.
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>4096</th><th>32</th><th>32</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr><th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93</th></tr>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94</th></tr>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92</th></tr>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pos
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: hmatzner/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
dccuchile/distilbert-base-spanish-uncased-finetuned-ner
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetunned-documents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetunned-documents
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1780
- Accuracy: 0.9787
- F1: 0.9786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 12 | 0.2736 | 0.9574 | 0.9564 |
| No log | 2.0 | 24 | 0.1780 | 0.9787 | 0.9786 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
dccuchile/distilbert-base-spanish-uncased-finetuned-pos
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: Schwarzschild009/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate
|
[
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ja
datasets:
- lmqg/qg_jaquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。"
example_title: "Question Generation Example 1"
- text: "『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる<hl>3万5000部<hl>が刷られた。他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。"
example_title: "Question Generation Example 2"
- text: "フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め<hl>30数点<hl>しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。"
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-ja-30000-jaquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_jaquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 27.98
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 48.26
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 27.25
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 79.93
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 57.76
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-ja-30000-jaquad-qg`
This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-ja-30000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-30000) for the question generation task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mt5-small-trimmed-ja-30000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-30000)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="vocabtrimmer/mt5-small-trimmed-ja-30000-jaquad-qg")
# model prediction
questions = model.generate_q(list_context="フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。", list_answer="30数点")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-ja-30000-jaquad-qg")
output = pipe("ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-30000-jaquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 79.93 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 53.31 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 41.29 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 33.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 27.98 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 27.25 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 57.76 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 48.26 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: vocabtrimmer/mt5-small-trimmed-ja-30000
- max_length: 512
- max_length_output: 32
- epoch: 15
- batch: 16
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-30000-jaquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
CennetOguz/distilbert-base-uncased-finetuned-recipe
|
[
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: kucharskipj/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Certified-Zoomer/DialoGPT-small-rick
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-16T00:24:45Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1152316642140909568/OTa1ez0X_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">James Currier</div>
<div style="text-align: center; font-size: 14px;">@jamescurrier</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from James Currier.
| Data | James Currier |
| --- | --- |
| Tweets downloaded | 1888 |
| Retweets | 551 |
| Short tweets | 26 |
| Tweets kept | 1311 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rrgg7hbd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamescurrier's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/r7kvi7im) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/r7kvi7im/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jamescurrier')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Chaddmckay/Cdm
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-Masked_Language_Model-US_Economic_News_Articles
results: []
language:
- en
metrics:
- perplexity
---
# bert-base-uncased-Masked_Language_Model-US_Economic_News_Articles
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased).
It achieves the following results on the evaluation set:
- Loss: 1.8322
## Model description
This is a masked language modeling project.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Masked%20Language%20Model/US%20Economic%20News%20Articles/US_Economic_News_Articles_MLM.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
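A minimal usage sketch with the `fill-mask` pipeline (the repository id below is an assumption based on the notebook link above):

```python
from transformers import pipeline

# Hypothetical repository id; adjust to where this model is actually hosted
fill_mask = pipeline(
    "fill-mask",
    model="DunnBC22/bert-base-uncased-Masked_Language_Model-US_Economic_News_Articles",
)
print(fill_mask("The Federal Reserve raised interest [MASK] last quarter."))
```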
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/trikialaaa/2k-clean-medical-articles-medicalnewstoday
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1833 | 1.0 | 2016 | 1.9529 |
| 2.004 | 2.0 | 4032 | 1.9002 |
| 1.941 | 3.0 | 6048 | 1.8600 |
Perplexity: 6.25
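This perplexity is simply the exponential of the evaluation loss reported above, which can be verified directly:

```python
import math

eval_loss = 1.8322  # evaluation loss reported above
print(round(math.exp(eval_loss), 2))  # 6.25
```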
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Chae/botman
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1080.16 +/- 206.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repository id and checkpoint filename below are assumptions):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with the matching algorithm
checkpoint = load_from_hub(repo_id="<username>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Chaewon/mnmt_decoder_en
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ja
datasets:
- lmqg/qg_jaquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。"
example_title: "Question Generation Example 1"
- text: "『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる<hl>3万5000部<hl>が刷られた。他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。"
example_title: "Question Generation Example 2"
- text: "フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め<hl>30数点<hl>しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。"
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-ja-10000-jaquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_jaquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 26.31
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 46.88
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 26.97
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 79.24
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 57.15
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-ja-10000-jaquad-qg`
This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-ja-10000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-10000) for the question generation task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mt5-small-trimmed-ja-10000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-10000)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="vocabtrimmer/mt5-small-trimmed-ja-10000-jaquad-qg")
# model prediction
questions = model.generate_q(list_context="フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。", list_answer="30数点")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-ja-10000-jaquad-qg")
output = pipe("ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-10000-jaquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 79.24 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 52.54 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 40.01 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 32.01 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 26.31 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 26.97 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 57.15 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 46.88 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: vocabtrimmer/mt5-small-trimmed-ja-10000
- max_length: 512
- max_length_output: 32
- epoch: 17
- batch: 16
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-10000-jaquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Chaewon/mnmt_decoder_en_gpt2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.35 +/- 37.49
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Chakita/KROBERT
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
tags:
- generated_from_trainer
model-index:
- name: wmt22_en_pt_br
results: []
metrics:
- bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wmt22_en_pt_br
This model is a fine-tuned version of [unicamp-dl/translation-en-pt-t5](https://huggingface.co/unicamp-dl/translation-en-pt-t5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Chakita/Kalbert
|
[
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.30 +/- 27.33
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Chandanbhat/distilbert-base-uncased-finetuned-cola
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1073307258052739072/xzsY47Aq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">itswill</div>
<div style="text-align: center; font-size: 14px;">@williesuede</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from itswill.
| Data | itswill |
| --- | --- |
| Tweets downloaded | 1665 |
| Retweets | 160 |
| Short tweets | 237 |
| Tweets kept | 1268 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/hf9jlvpn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @williesuede's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/afkaf93u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/afkaf93u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/williesuede')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Charlotte/text2dm_models
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- text-classification
- endpoints-template
- optimum
library_name: generic
---
# Optimized and Quantized DistilBERT with a custom pipeline and handler.py
> NOTE: Blog post coming soon
This is a template repository for text classification using Optimum and ONNX Runtime to support generic inference with the Hugging Face Hub generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `handler.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload the optimum model and tokenizers as well as the `text-classification` pipeline needed for inference; it is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work (a minimal sketch follows).
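Under those constraints, a handler might look like the following sketch (class layout per the generic Inference API convention; the exact file contents of this repository may differ):

```python
# handler.py -- a minimal sketch, not the exact implementation of this repository
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline


class PreTrainedPipeline:
    def __init__(self, path=""):
        # Called once at startup: load the optimized/quantized ONNX model and
        # tokenizer, then build the text-classification pipeline.
        model = ORTModelForSequenceClassification.from_pretrained(path)
        tokenizer = AutoTokenizer.from_pretrained(path)
        self.pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)

    def __call__(self, inputs: str):
        # Called for every request: run inference and return the predictions.
        return self.pipeline(inputs)
```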
Add
```
library_name: generic
```
to the readme.
_Note: the `generic` community image currently only supports `inputs` as a parameter and accepts no additional parameters._
|
Cheapestmedsshop/Buymodafinilus
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- generated_from_keras_callback
model-index:
- name: skillsBERT_v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# skillsBERT_v2
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0034
- Validation Loss: 0.1296
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamW', 'weight_decay': 0.004, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1397 | 0.1414 | 0 |
| 0.1276 | 0.1318 | 1 |
| 0.1132 | 0.1147 | 2 |
| 0.0977 | 0.1059 | 3 |
| 0.0868 | 0.0998 | 4 |
| 0.0789 | 0.0974 | 5 |
| 0.0726 | 0.0978 | 6 |
| 0.0668 | 0.0958 | 7 |
| 0.0615 | 0.0963 | 8 |
| 0.0564 | 0.0947 | 9 |
| 0.0514 | 0.0973 | 10 |
| 0.0465 | 0.0983 | 11 |
| 0.0416 | 0.1022 | 12 |
| 0.0370 | 0.1007 | 13 |
| 0.0326 | 0.1029 | 14 |
| 0.0285 | 0.1075 | 15 |
| 0.0247 | 0.1043 | 16 |
| 0.0213 | 0.1052 | 17 |
| 0.0180 | 0.1079 | 18 |
| 0.0153 | 0.1092 | 19 |
| 0.0128 | 0.1139 | 20 |
| 0.0106 | 0.1136 | 21 |
| 0.0089 | 0.1192 | 22 |
| 0.0076 | 0.1212 | 23 |
| 0.0064 | 0.1154 | 24 |
| 0.0055 | 0.1199 | 25 |
| 0.0048 | 0.1220 | 26 |
| 0.0042 | 0.1258 | 27 |
| 0.0038 | 0.1272 | 28 |
| 0.0034 | 0.1296 | 29 |
### Framework versions
- Transformers 4.28.0.dev0
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Cheatham/xlm-roberta-large-finetuned-d12
|
[
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.21 +/- 1.19
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repository id and checkpoint filename below are assumptions):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with the matching algorithm
checkpoint = load_from_hub(repo_id="<username>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Cheatham/xlm-roberta-large-finetuned-d1r01
|
[
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 21 | 2023-03-16T01:26:41Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.36 +/- 47.07
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Cheatham/xlm-roberta-large-finetuned
|
[
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20 | null |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ja
datasets:
- lmqg/qg_jaquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。"
example_title: "Question Generation Example 1"
- text: "『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる<hl>3万5000部<hl>が刷られた。他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。"
example_title: "Question Generation Example 2"
- text: "フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め<hl>30数点<hl>しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。"
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-ja-5000-jaquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_jaquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 25.03
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 47.09
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 26.62
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 79.76
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 57.12
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-ja-5000-jaquad-qg`
This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-ja-5000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-5000) for the question generation task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mt5-small-trimmed-ja-5000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-5000)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="vocabtrimmer/mt5-small-trimmed-ja-5000-jaquad-qg")
# model prediction
questions = model.generate_q(list_context="フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。", list_answer="30数点")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-ja-5000-jaquad-qg")
output = pipe("ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-5000-jaquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 79.76 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 51.21 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 38.74 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 30.76 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 25.03 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 26.62 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 57.12 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 47.09 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: vocabtrimmer/mt5-small-trimmed-ja-5000
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 16
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ja-5000-jaquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Cheatham/xlm-roberta-large-finetuned3
|
[
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 22 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9295658097560081
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2150
- Accuracy: 0.9295
- F1: 0.9296
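A minimal inference sketch (the repository id below is a placeholder, not the actual location of this model):

```python
from transformers import pipeline

# Placeholder repository id; replace with the actual location of this model
classifier = pipeline(
    "text-classification",
    model="<username>/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results!"))
```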
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8335 | 1.0 | 250 | 0.3067 | 0.914 | 0.9124 |
| 0.2462 | 2.0 | 500 | 0.2150 | 0.9295 | 0.9296 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
|
CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper
|
[
"ko",
"gpt2",
"license:cc-by-nc-sa-4.0"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.30 +/- 17.17
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ChrisVCB/DialoGPT-medium-cmjs
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-frquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg): `vocabtrimmer/mbart-large-cc25-frquad-qg-trimmed-fr`
This model is a trimmed version of [lmqg/mbart-large-cc25-frquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-frquad-qg | vocabtrimmer/mbart-large-cc25-frquad-qg-trimmed-fr |
|:---------------------------|:----------------------------------|:-----------------------------------------------------|
| parameter_size_full | 610,852,864 | 442,588,160 |
| parameter_size_embedding | 256,028,672 | 87,763,968 |
| vocab_size | 250,028 | 85,707 |
| compression_rate_full | 100.0 | 72.45 |
| compression_rate_embedding | 100.0 | 34.28 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | | 2 |
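Since trimming only shrinks the embedding matrix and the tokenizer vocabulary, the trimmed model loads like any other mBART checkpoint (a minimal sketch):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# The trimmed checkpoint keeps the original architecture; only the embeddings
# and the tokenizer vocabulary are smaller.
tokenizer = AutoTokenizer.from_pretrained("vocabtrimmer/mbart-large-cc25-frquad-qg-trimmed-fr")
model = AutoModelForSeq2SeqLM.from_pretrained("vocabtrimmer/mbart-large-cc25-frquad-qg-trimmed-fr")
```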
|
CleveGreen/FieldClassifier_v2
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 46 | null |
---
license: apache-2.0
language:
- en
tags:
- cell segmentation
- stardist
- hover-net
metrics:
- f1-score
pipeline_tag: image-segmentation
library_name: transformers
---
# Model Card for cell-seg-sribd
<!-- Provide a quick summary of what the model is/does. -->
This repository provides the solution of team Sribd-med for the NeurIPS-CellSeg Challenge. The details of our method are described in our paper [Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images]. Some parts of the code are adapted from the baseline code of the NeurIPS-CellSeg-Baseline repository.
You can reproduce our method step by step as follows:
## Environments and Requirements:
Install requirements by
```shell
python -m pip install -r requirements.txt
```
## Dataset
The competition training and tuning data can be downloaded from https://neurips22-cellseg.grand-challenge.org/dataset/
In addition, you can download three public datasets from the following links:
Cellpose: https://www.cellpose.org/dataset
Omnipose: http://www.cellpose.org/dataset_omnipose
Sartorius: https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/overview
## Automatic cell classification
You can classify the cells into four classes in this step.
Put all the images (competition + Cellpose + Omnipose + Sartorius) in one folder (data/allimages).
Run the classification code:
```shell
python classification/unsup_classification.py
```
The results will be stored in data/classification_results/.
## CNN-based classification model training
Using the classified images in data/classification_results/, a ResNet18 is trained:
```shell
python classification/train_classification.py
```
## Segmentation Training
Pre-train convnext-stardist using all the images (data/allimages):
```shell
python train_convnext_stardist.py
```
For classes 0, 2, and 3, fine-tune on the classified data (take class 1 as an example):
```shell
python finetune_convnext_stardist.py model_dir=(The pretrained convnext-stardist model) data_dir='data/classification_results/class1'
```
For class 1, train convnext-hover from scratch using the classified class 3 data:
```shell
python train_convnext_hover.py data_dir='data/classification_results/class3'
```
Finally, four segmentation models will be trained.
## Trained models
The models are in models/.
## Inference
The inference process includes classification and segmentation.
```shell
python predict.py -i input_path -o output_path --model_path './models'
```
## Evaluation
Calculate the F-score for evaluation:
```shell
python compute_metric.py --gt_path path_to_labels --seg_path output_path
```
## Results
The tuning set F1 score of our method is 0.8795. The rank running time of our method on all 101 cases in the tuning set is zero on our local workstation.
|
Contrastive-Tension/BERT-Large-CT
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-itquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qg): `vocabtrimmer/mbart-large-cc25-itquad-qg-trimmed-it`
This model is a trimmed version of [lmqg/mbart-large-cc25-itquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-itquad-qg | vocabtrimmer/mbart-large-cc25-itquad-qg-trimmed-it |
|:---------------------------|:----------------------------------|:-----------------------------------------------------|
| parameter_size_full | 610,852,864 | 424,257,536 |
| parameter_size_embedding | 512,057,344 | 138,866,688 |
| vocab_size | 250,028 | 67,806 |
| compression_rate_full | 100.0 | 69.45 |
| compression_rate_embedding | 100.0 | 27.12 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | | 2 |
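As with the other trimmed checkpoints, the architecture is unchanged, so the model loads like any mBART checkpoint (a minimal sketch):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Only the embedding matrix and tokenizer vocabulary differ from the original
tokenizer = AutoTokenizer.from_pretrained("vocabtrimmer/mbart-large-cc25-itquad-qg-trimmed-it")
model = AutoModelForSeq2SeqLM.from_pretrained("vocabtrimmer/mbart-large-cc25-itquad-qg-trimmed-it")
```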
|
Contrastive-Tension/RoBerta-Large-CT-STSb
|
[
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-louka
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-louka
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 210 | 4.2854 |
| No log | 2.0 | 420 | 4.2395 |
| 4.2046 | 3.0 | 630 | 4.2182 |
| 4.2046 | 4.0 | 840 | 4.2138 |
| 3.9721 | 4.76 | 1000 | 4.2152 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
CoveJH/ConBot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 632.50 +/- 103.20
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ashishj20 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ashishj20 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ashishj20
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Crasher222/kaggle-comp-test
|
[
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Crasher222/autonlp-data-kaggle-test",
"transformers",
"autonlp",
"co2_eq_emissions"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0341
- Accuracy: 0.7912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7441 | 1.0 | 4597 | 0.6021 | 0.7666 |
| 0.375 | 2.0 | 9194 | 0.6227 | 0.7862 |
| 0.1344 | 3.0 | 13791 | 1.0341 | 0.7912 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.0
- Tokenizers 0.13.2
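Since SWAG is a multiple-choice task, inference goes through `AutoModelForMultipleChoice`, where each (context, ending) pair is scored as one candidate. A hedged sketch; the repo id below is a placeholder for wherever this checkpoint is hosted:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "bert-base-uncased-finetuned-swag"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She picks up the guitar."
endings = ["She starts to play a song.", "She eats the guitar."]

# Encode every (context, ending) pair, then add the num_choices dimension:
# the model expects input ids of shape (batch, num_choices, seq_len).
enc = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print(endings[logits.argmax(-1).item()])
```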
|
CuongLD/wav2vec2-large-xlsr-vietnamese
|
[
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"dataset:common_voice, infore_25h",
"arxiv:2006.11477",
"arxiv:2006.13979",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | 2023-03-16T06:52:23Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="jjlira/Q-Taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
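A greedy rollout sketch continuing from the snippet above. It assumes the pickled dict stores the Q-table under a `qtable` key (as in the Deep RL course notebook) and the classic `gym` step API:
```python
import numpy as np

state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # always exploit the learned values
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```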
|
Davlan/bert-base-multilingual-cased-finetuned-wolof
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-main-gpu-20e-final
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9916666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-main-gpu-20e-final
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0251
- Accuracy: 0.9917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5767 | 1.0 | 551 | 0.5565 | 0.7463 |
| 0.3985 | 2.0 | 1102 | 0.3165 | 0.8711 |
| 0.2988 | 3.0 | 1653 | 0.1835 | 0.9293 |
| 0.2449 | 4.0 | 2204 | 0.1150 | 0.9572 |
| 0.2037 | 5.0 | 2755 | 0.0993 | 0.9632 |
| 0.1646 | 6.0 | 3306 | 0.0750 | 0.9717 |
| 0.1995 | 7.0 | 3857 | 0.0610 | 0.9776 |
| 0.1659 | 8.0 | 4408 | 0.0485 | 0.9815 |
| 0.1449 | 9.0 | 4959 | 0.0505 | 0.9821 |
| 0.1315 | 10.0 | 5510 | 0.0444 | 0.9843 |
| 0.102 | 11.0 | 6061 | 0.0440 | 0.9838 |
| 0.1039 | 12.0 | 6612 | 0.0359 | 0.9870 |
| 0.0798 | 13.0 | 7163 | 0.0393 | 0.9869 |
| 0.1033 | 14.0 | 7714 | 0.0343 | 0.9890 |
| 0.078 | 15.0 | 8265 | 0.0298 | 0.9902 |
| 0.0765 | 16.0 | 8816 | 0.0299 | 0.9901 |
| 0.0769 | 17.0 | 9367 | 0.0275 | 0.9908 |
| 0.0751 | 18.0 | 9918 | 0.0271 | 0.9910 |
| 0.0822 | 19.0 | 10469 | 0.0251 | 0.9917 |
| 0.0756 | 20.0 | 11020 | 0.0254 | 0.9913 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
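A hedged inference sketch using the `transformers` image-classification pipeline; the repo id below is a placeholder for wherever this checkpoint is hosted:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="swin-tiny-patch4-window7-224-finetuned-main-gpu-20e-final",  # placeholder repo id
)
# Accepts a local path, URL, or PIL image; returns labels with scores.
print(classifier("example.jpg")[:3])
```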
|
Davlan/xlm-roberta-large-masakhaner
|
[
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,449 | null |
Tiny BERT fine-tuned on the [Kaggle English Fake News Detection dataset](https://www.kaggle.com/datasets/sadikaljarif/fake-news-detection-dataset-english).
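A hedged usage sketch via the standard `transformers` pipeline; the repo id below is a placeholder for this checkpoint:
```python
from transformers import pipeline

detector = pipeline("text-classification", model="tiny-bert-fake-news")  # placeholder repo id
print(detector("Breaking: scientists confirm the moon is made of cheese."))
```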
|
Declan/NewYorkTimes_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-koquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg): `vocabtrimmer/mbart-large-cc25-koquad-qg-trimmed-ko-10000`
This model is a trimmed version of [lmqg/mbart-large-cc25-koquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-koquad-qg | vocabtrimmer/mbart-large-cc25-koquad-qg-trimmed-ko-10000 |
|:---------------------------|:----------------------------------|:-----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 365,068,288 |
| parameter_size_embedding | 512,057,344 | 20,488,192 |
| vocab_size | 250,028 | 10,004 |
| compression_rate_full | 100.0 | 59.76 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | 10000 | 2 |
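The trimmed model loads through the standard `transformers` seq2seq API. A sketch is below; the `<hl>`-highlighted input format is an assumption carried over from the upstream lmqg question-generation models:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vocabtrimmer/mbart-large-cc25-koquad-qg-trimmed-ko-10000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Highlight the answer span with <hl> tokens (assumed lmqg convention).
text = "<hl> 서울 <hl> 은 대한민국의 수도이다."
out = model.generate(**tokenizer(text, return_tensors="pt"), max_length=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```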
|
Dev-DGT/food-dbert-multiling
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17 | null |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: jrreda/rl01-ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Devid/DialoGPT-small-Miku
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | 2023-03-16T14:18:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7651 | 1.0 | 2334 | 3.6675 |
| 3.6372 | 2.0 | 4668 | 3.6469 |
| 3.5914 | 3.0 | 7002 | 3.6442 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.0+cu102
- Datasets 2.7.1.dev0
- Tokenizers 0.12.1
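A hedged generation sketch; the repo id below is a placeholder for wherever this checkpoint is hosted:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2-finetuned-wikitext2")  # placeholder repo id
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```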
|
Digakive/Hsgshs
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-koquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg): `vocabtrimmer/mbart-large-cc25-koquad-qg-trimmed-ko-30000`
This model is a trimmed version of [lmqg/mbart-large-cc25-koquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-koquad-qg | vocabtrimmer/mbart-large-cc25-koquad-qg-trimmed-ko-30000 |
|:---------------------------|:----------------------------------|:-----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 385,548,288 |
| parameter_size_embedding | 512,057,344 | 61,448,192 |
| vocab_size | 250,028 | 30,004 |
| compression_rate_full | 100.0 | 63.12 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | 30000 | 2 |
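The embedding figures in both trimming tables follow directly from vocabulary size times hidden size, counted twice for the input embedding and the LM head (mBART-large uses a hidden size of 1024). A quick check:
```python
d_model = 1024  # mBART-large hidden size
for vocab in (250_028, 30_004, 10_004):
    print(vocab, vocab * d_model * 2)
# 250028 -> 512057344 (full model), 30004 -> 61448192, 10004 -> 20488192,
# matching parameter_size_embedding in the tables above.
```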
|
Dilmk2/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **sac** Agent playing **Pyramids**
This is a trained model of a **sac** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: EExe/sac-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Dizoid/Lll
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
- keras-dreambooth
- wild-card
license: creativeml-openrail-m
inference: true
library_name: keras
---
## Model description
The Ignatius Farray dreambooth model would be a sleek and modern diffusion model designed to transport users into a world of absurdity and hilarity.
I cannot promise that all the images will be adorned with bright, eye-catching colors and details that reflect Ignatius' unique sense of style and humor.
## Images generated by model

## Intended uses & limitations
You can use it to create images of Ignatius in different situations. Try not to use it for bad purposes, and keep the "commedia" spirit in it.
## Training and evaluation data
To train this model, this was the training [notebook](https://colab.research.google.com/github/huggingface/community-events/blob/main/keras-dreambooth-sprint/Dreambooth_on_Hub.ipynb), and the training dataset was this [one](https://huggingface.co/datasets/matallanas/ignatius).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| inner_optimizer.class_name | Custom>RMSprop |
| inner_optimizer.config.name | RMSprop |
| inner_optimizer.config.weight_decay | None |
| inner_optimizer.config.clipnorm | None |
| inner_optimizer.config.global_clipnorm | None |
| inner_optimizer.config.clipvalue | None |
| inner_optimizer.config.use_ema | False |
| inner_optimizer.config.ema_momentum | 0.99 |
| inner_optimizer.config.ema_overwrite_frequency | 100 |
| inner_optimizer.config.jit_compile | True |
| inner_optimizer.config.is_legacy_optimizer | False |
| inner_optimizer.config.learning_rate | 0.0010000000474974513 |
| inner_optimizer.config.rho | 0.9 |
| inner_optimizer.config.momentum | 0.0 |
| inner_optimizer.config.epsilon | 1e-07 |
| inner_optimizer.config.centered | False |
| dynamic | True |
| initial_scale | 32768.0 |
| dynamic_growth_steps | 2000 |
| training_precision | mixed_float16 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
## Usage
The instance token used is "ignatius". An example prompt: "a photo of ignatius on a car".
```python
from huggingface_hub import from_pretrained_keras
import keras_cv

# Assumed resolution of 512 (the usual DreamBooth training size); adjust if needed.
resolution = 512
sd_dreambooth_model = keras_cv.models.StableDiffusion(
    img_width=resolution, img_height=resolution, jit_compile=True,
)
# Swap the fine-tuned diffusion weights into the base pipeline.
loaded_diffusion_model = from_pretrained_keras("keras-dreambooth/ignatius")
sd_dreambooth_model._diffusion_model = loaded_diffusion_model

prompt = "ignatius on the moon"
generated_img = sd_dreambooth_model.text_to_image(
    prompt,
    batch_size=4,
    num_steps=150,
    unconditional_guidance_scale=15,
)
```
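Note that this swaps in only the fine-tuned diffusion (U-Net) weights; the text encoder and image decoder remain those of the base Stable Diffusion model, which is why the base `StableDiffusion` object is constructed first.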
|
Dkwkk/W
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
metrics:
- type: mean_reward
value: 29.70 +/- 8.01
name: mean_reward
verified: false
---
# **DQN** Agent playing **BreakoutNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BreakoutNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga dmenini -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga dmenini -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ -orga dmenini
```
## Hyperparameters
```python
OrderedDict([('batch_size', 100),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
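For a quick quantitative check outside the zoo scripts, a hedged sketch with SB3's `evaluate_policy`; the checkpoint path under `logs/` is hypothetical:
```python
import gymnasium as gym
from stable_baselines3 import DQN
from stable_baselines3.common.atari_wrappers import AtariWrapper
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

# Same preprocessing as training: AtariWrapper plus 4-frame stacking.
env = DummyVecEnv([lambda: AtariWrapper(gym.make("BreakoutNoFrameskip-v4"))])
env = VecFrameStack(env, n_stack=4)
model = DQN.load("logs/dqn/BreakoutNoFrameskip-v4_1/BreakoutNoFrameskip-v4.zip", env=env)

mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")
```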
|
Dmitriiserg/Pxd
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-16T14:58:47Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: dussinus/PPO-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DongHyoungLee/distilbert-base-uncased-finetuned-cola
|
[
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] |
text-classification
|
{
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | null |
---
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wavlm-base-plus_zh_tw_ver2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: zh-TW
split: test
args: zh-TW
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-base-plus_zh_tw_ver2
This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5278
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 82.628 | 2.5 | 500 | 79.5587 | 1.0 |
| 17.5635 | 5.0 | 1000 | 11.5929 | 1.0 |
| 6.4288 | 7.5 | 1500 | 6.4475 | 1.0 |
| 6.4092 | 10.0 | 2000 | 6.4579 | 1.0 |
| 6.3982 | 12.5 | 2500 | 6.4662 | 1.0 |
| 6.391 | 15.0 | 3000 | 6.4655 | 1.0 |
| 6.4097 | 17.5 | 3500 | 6.4691 | 1.0 |
| 6.3986 | 20.0 | 4000 | 6.4702 | 1.0 |
| 6.4069 | 22.5 | 4500 | 6.4761 | 1.0 |
| 6.4158 | 25.0 | 5000 | 6.4750 | 1.0 |
| 6.4117 | 27.5 | 5500 | 6.4816 | 1.0 |
| 6.4086 | 30.0 | 6000 | 6.4806 | 1.0 |
| 6.3992 | 32.5 | 6500 | 6.4872 | 1.0 |
| 6.3946 | 35.0 | 7000 | 6.4866 | 1.0 |
| 6.4212 | 37.5 | 7500 | 6.4895 | 1.0 |
| 6.4051 | 40.0 | 8000 | 6.4926 | 1.0 |
| 6.398 | 42.5 | 8500 | 6.5015 | 1.0 |
| 6.3967 | 45.0 | 9000 | 6.4960 | 1.0 |
| 6.4096 | 47.5 | 9500 | 6.5003 | 1.0 |
| 6.4068 | 50.0 | 10000 | 6.5026 | 1.0 |
| 6.4062 | 52.5 | 10500 | 6.5071 | 1.0 |
| 6.395 | 55.0 | 11000 | 6.5066 | 1.0 |
| 6.4079 | 57.5 | 11500 | 6.5093 | 1.0 |
| 6.411 | 60.0 | 12000 | 6.5106 | 1.0 |
| 6.4023 | 62.5 | 12500 | 6.5112 | 1.0 |
| 6.4053 | 65.0 | 13000 | 6.5143 | 1.0 |
| 6.4103 | 67.5 | 13500 | 6.5172 | 1.0 |
| 6.3899 | 70.0 | 14000 | 6.5182 | 1.0 |
| 6.4054 | 72.5 | 14500 | 6.5197 | 1.0 |
| 6.391 | 75.0 | 15000 | 6.5200 | 1.0 |
| 6.3988 | 77.5 | 15500 | 6.5220 | 1.0 |
| 6.4059 | 80.0 | 16000 | 6.5228 | 1.0 |
| 6.392 | 82.5 | 16500 | 6.5233 | 1.0 |
| 6.3947 | 85.0 | 17000 | 6.5253 | 1.0 |
| 6.3966 | 87.5 | 17500 | 6.5259 | 1.0 |
| 6.3905 | 90.0 | 18000 | 6.5264 | 1.0 |
| 6.4003 | 92.5 | 18500 | 6.5272 | 1.0 |
| 6.3877 | 95.0 | 19000 | 6.5275 | 1.0 |
| 6.3903 | 97.5 | 19500 | 6.5277 | 1.0 |
| 6.3944 | 100.0 | 20000 | 6.5278 | 1.0 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.0+cu102
- Datasets 2.10.1
- Tokenizers 0.13.2
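For reference, a hedged inference sketch with `WavLMForCTC`. This illustrates the API only: with a WER of 1.0, the checkpoint is not usable for transcription. The repo id is a placeholder, and the waveform is a stand-in:
```python
import numpy as np
import torch
from transformers import AutoProcessor, WavLMForCTC

model_id = "wavlm-base-plus_zh_tw_ver2"  # placeholder repo id
processor = AutoProcessor.from_pretrained(model_id)
model = WavLMForCTC.from_pretrained(model_id)

speech = np.zeros(16_000, dtype=np.float32)  # stand-in: use a real 16 kHz mono waveform
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```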
|
Doogie/Waynehills-KE-T5-doogie
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: game-ad-0306_outputs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: ./data/games-ad-0306
type: imagefolder
config: games-ad-0306
split: train
args: games-ad-0306
metrics:
- name: Accuracy
type: accuracy
value: 0.3024054982817869
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# game-ad-0306_outputs
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ./data/games-ad-0306 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6235
- Accuracy: 0.3024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13373
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:------:|:---------------:|:--------:|
| 3.2891 | 1.0 | 103 | 3.0266 | 0.2165 |
| 2.9971 | 2.0 | 206 | 2.9194 | 0.2302 |
| 2.9151 | 3.0 | 309 | 2.8731 | 0.2474 |
| 2.8579 | 4.0 | 412 | 2.8072 | 0.2715 |
| 2.7768 | 5.0 | 515 | 2.7918 | 0.2577 |
| 2.7184 | 6.0 | 618 | 2.7296 | 0.2818 |
| 2.648 | 7.0 | 721 | 2.7044 | 0.2921 |
| 2.5884 | 8.0 | 824 | 2.7190 | 0.2680 |
| 2.5146 | 9.0 | 927 | 2.6942 | 0.2784 |
| 2.4384 | 10.0 | 1030 | 2.6877 | 0.2921 |
| 2.442 | 11.0 | 1133 | 2.6412 | 0.2818 |
| 2.3099 | 12.0 | 1236 | 2.6331 | 0.2852 |
| 2.2685 | 13.0 | 1339 | 2.6451 | 0.2990 |
| 2.182 | 14.0 | 1442 | 2.6927 | 0.2715 |
| 2.1421 | 15.0 | 1545 | 2.6615 | 0.3162 |
| 2.0483 | 16.0 | 1648 | 2.6500 | 0.3230 |
| 1.9884 | 17.0 | 1751 | 2.6527 | 0.2990 |
| 1.9316 | 18.0 | 1854 | 2.6736 | 0.2990 |
| 1.8785 | 19.0 | 1957 | 2.6391 | 0.2921 |
| 1.788 | 20.0 | 2060 | 2.7002 | 0.3127 |
| 1.7115 | 21.0 | 2163 | 2.8321 | 0.2715 |
| 1.6929 | 22.0 | 2266 | 2.6235 | 0.3024 |
| 1.6239 | 23.0 | 2369 | 2.6378 | 0.3058 |
| 1.5387 | 24.0 | 2472 | 2.6888 | 0.3127 |
| 1.5095 | 25.0 | 2575 | 2.6888 | 0.3127 |
| 1.4153 | 26.0 | 2678 | 2.6771 | 0.2715 |
| 1.4254 | 27.0 | 2781 | 2.7354 | 0.2887 |
| 1.3351 | 28.0 | 2884 | 2.7175 | 0.2990 |
| 1.2955 | 29.0 | 2987 | 2.7679 | 0.2818 |
| 1.2232 | 30.0 | 3090 | 2.7784 | 0.2921 |
| 1.2115 | 31.0 | 3193 | 2.8496 | 0.2749 |
| 1.1656 | 32.0 | 3296 | 2.7899 | 0.2818 |
| 1.1419 | 33.0 | 3399 | 2.7646 | 0.2715 |
| 1.0481 | 34.0 | 3502 | 2.8416 | 0.2715 |
| 0.9763 | 35.0 | 3605 | 2.8370 | 0.3024 |
| 0.9452 | 36.0 | 3708 | 2.7904 | 0.2955 |
| 0.9178 | 37.0 | 3811 | 2.8309 | 0.2715 |
| 0.9115 | 38.0 | 3914 | 2.8584 | 0.3093 |
| 0.8472 | 39.0 | 4017 | 2.9066 | 0.2612 |
| 0.8323 | 40.0 | 4120 | 2.8630 | 0.2921 |
| 0.7622 | 41.0 | 4223 | 3.0020 | 0.2680 |
| 0.7531 | 42.0 | 4326 | 2.8885 | 0.2921 |
| 0.7054 | 43.0 | 4429 | 2.8820 | 0.2818 |
| 0.685 | 44.0 | 4532 | 2.8764 | 0.3162 |
| 0.7206 | 45.0 | 4635 | 2.8659 | 0.3162 |
| 0.6304 | 46.0 | 4738 | 2.9537 | 0.2887 |
| 0.6369 | 47.0 | 4841 | 2.9660 | 0.2509 |
| 0.6161 | 48.0 | 4944 | 3.1112 | 0.2543 |
| 0.618 | 49.0 | 5047 | 2.9729 | 0.2990 |
| 0.556 | 50.0 | 5150 | 2.9870 | 0.2921 |
| 0.5314 | 51.0 | 5253 | 2.9934 | 0.3093 |
| 0.5502 | 52.0 | 5356 | 2.9379 | 0.2818 |
| 0.4958 | 53.0 | 5459 | 3.0344 | 0.3024 |
| 0.4896 | 54.0 | 5562 | 2.9924 | 0.2749 |
| 0.4803 | 55.0 | 5665 | 3.0161 | 0.3127 |
| 0.4554 | 56.0 | 5768 | 3.0221 | 0.2818 |
| 0.4591 | 57.0 | 5871 | 3.0461 | 0.3024 |
| 0.4349 | 58.0 | 5974 | 3.1377 | 0.3265 |
| 0.4127 | 59.0 | 6077 | 3.0169 | 0.2955 |
| 0.3973 | 60.0 | 6180 | 3.0338 | 0.2818 |
| 0.4109 | 61.0 | 6283 | 3.0638 | 0.2818 |
| 0.3872 | 62.0 | 6386 | 3.0810 | 0.2818 |
| 0.3693 | 63.0 | 6489 | 3.2003 | 0.2715 |
| 0.3457 | 64.0 | 6592 | 3.0843 | 0.2990 |
| 0.3521 | 65.0 | 6695 | 3.1623 | 0.3058 |
| 0.3625 | 66.0 | 6798 | 3.0036 | 0.3299 |
| 0.3339 | 67.0 | 6901 | 3.2389 | 0.2921 |
| 0.3378 | 68.0 | 7004 | 3.2493 | 0.2990 |
| 0.2981 | 69.0 | 7107 | 3.1308 | 0.2955 |
| 0.3023 | 70.0 | 7210 | 3.2455 | 0.3093 |
| 0.3076 | 71.0 | 7313 | 3.2725 | 0.2887 |
| 0.3201 | 72.0 | 7416 | 3.2563 | 0.2887 |
| 0.3083 | 73.0 | 7519 | 3.2520 | 0.2921 |
| 0.2906 | 74.0 | 7622 | 3.3344 | 0.3093 |
| 0.2721 | 75.0 | 7725 | 3.1952 | 0.2852 |
| 0.2873 | 76.0 | 7828 | 3.2529 | 0.3058 |
| 0.278 | 77.0 | 7931 | 3.3428 | 0.2818 |
| 0.2573 | 78.0 | 8034 | 3.3216 | 0.2784 |
| 0.2578 | 79.0 | 8137 | 3.4178 | 0.2955 |
| 0.2774 | 80.0 | 8240 | 3.3449 | 0.2818 |
| 0.2762 | 81.0 | 8343 | 3.3452 | 0.2749 |
| 0.2504 | 82.0 | 8446 | 3.5792 | 0.2955 |
| 0.2552 | 83.0 | 8549 | 3.3478 | 0.2818 |
| 0.2541 | 84.0 | 8652 | 3.4902 | 0.2784 |
| 0.2616 | 85.0 | 8755 | 3.2829 | 0.3127 |
| 0.2079 | 86.0 | 8858 | 3.5287 | 0.3162 |
| 0.2538 | 87.0 | 8961 | 3.4731 | 0.3196 |
| 0.2485 | 88.0 | 9064 | 3.5998 | 0.2646 |
| 0.2714 | 89.0 | 9167 | 3.4567 | 0.2921 |
| 0.232 | 90.0 | 9270 | 3.5061 | 0.2818 |
| 0.2577 | 91.0 | 9373 | 3.5370 | 0.2921 |
| 0.2232 | 92.0 | 9476 | 3.5062 | 0.2509 |
| 0.2351 | 93.0 | 9579 | 3.5592 | 0.2784 |
| 0.2299 | 94.0 | 9682 | 3.5167 | 0.3333 |
| 0.2415 | 95.0 | 9785 | 3.6283 | 0.2887 |
| 0.2265 | 96.0 | 9888 | 3.4819 | 0.2852 |
| 0.2448 | 97.0 | 9991 | 3.5793 | 0.2990 |
| 0.2141 | 98.0 | 10094 | 3.5728 | 0.2887 |
| 0.1979 | 99.0 | 10197 | 3.4685 | 0.2921 |
| 0.2077 | 100.0 | 10300 | 3.5586 | 0.3230 |
| 0.1854 | 101.0 | 10403 | 3.5650 | 0.3162 |
| 0.2017 | 102.0 | 10506 | 3.4760 | 0.2921 |
| 0.2119 | 103.0 | 10609 | 3.5531 | 0.2784 |
| 0.2314 | 104.0 | 10712 | 3.5118 | 0.3024 |
| 0.212 | 105.0 | 10815 | 3.5496 | 0.3196 |
| 0.197 | 106.0 | 10918 | 3.6080 | 0.2543 |
| 0.2067 | 107.0 | 11021 | 3.6217 | 0.2887 |
| 0.1896 | 108.0 | 11124 | 3.6446 | 0.3230 |
| 0.198 | 109.0 | 11227 | 3.7699 | 0.2784 |
| 0.2152 | 110.0 | 11330 | 3.6709 | 0.3162 |
| 0.2121 | 111.0 | 11433 | 3.6266 | 0.3368 |
| 0.1869 | 112.0 | 11536 | 3.6681 | 0.2955 |
| 0.1927 | 113.0 | 11639 | 3.7305 | 0.3162 |
| 0.2259 | 114.0 | 11742 | 3.6302 | 0.3127 |
| 0.1809 | 115.0 | 11845 | 3.6301 | 0.3093 |
| 0.2071 | 116.0 | 11948 | 3.7288 | 0.3127 |
| 0.1977 | 117.0 | 12051 | 3.6467 | 0.3058 |
| 0.1902 | 118.0 | 12154 | 3.7039 | 0.3093 |
| 0.1996 | 119.0 | 12257 | 3.9013 | 0.3093 |
| 0.2122 | 120.0 | 12360 | 3.8228 | 0.2990 |
| 0.1702 | 121.0 | 12463 | 3.7118 | 0.3162 |
| 0.1889 | 122.0 | 12566 | 3.7211 | 0.3162 |
| 0.1857 | 123.0 | 12669 | 3.8894 | 0.2509 |
| 0.2003 | 124.0 | 12772 | 3.6575 | 0.3093 |
| 0.202 | 125.0 | 12875 | 3.7925 | 0.3333 |
| 0.1722 | 126.0 | 12978 | 3.8188 | 0.2818 |
| 0.1716 | 127.0 | 13081 | 3.9584 | 0.3162 |
| 0.1598 | 128.0 | 13184 | 3.7732 | 0.3265 |
| 0.1825 | 129.0 | 13287 | 3.8038 | 0.3196 |
| 0.1716 | 130.0 | 13390 | 3.7606 | 0.3196 |
| 0.179 | 131.0 | 13493 | 3.7458 | 0.2955 |
| 0.1817 | 132.0 | 13596 | 3.8413 | 0.2955 |
| 0.1606 | 133.0 | 13699 | 3.8766 | 0.3196 |
| 0.1625 | 134.0 | 13802 | 3.8188 | 0.3230 |
| 0.1622 | 135.0 | 13905 | 3.7223 | 0.2955 |
| 0.1852 | 136.0 | 14008 | 3.7774 | 0.3024 |
| 0.1671 | 137.0 | 14111 | 3.8407 | 0.2612 |
| 0.1862 | 138.0 | 14214 | 3.7442 | 0.3196 |
| 0.1808 | 139.0 | 14317 | 3.8458 | 0.3093 |
| 0.1375 | 140.0 | 14420 | 3.7372 | 0.3024 |
| 0.1876 | 141.0 | 14523 | 3.9925 | 0.2990 |
| 0.1693 | 142.0 | 14626 | 3.9364 | 0.3058 |
| 0.1719 | 143.0 | 14729 | 3.9149 | 0.2818 |
| 0.1406 | 144.0 | 14832 | 3.8603 | 0.2955 |
| 0.1709 | 145.0 | 14935 | 3.9216 | 0.3196 |
| 0.1794 | 146.0 | 15038 | 3.8934 | 0.3058 |
| 0.1455 | 147.0 | 15141 | 4.0086 | 0.2784 |
| 0.1959 | 148.0 | 15244 | 3.9358 | 0.3024 |
| 0.1664 | 149.0 | 15347 | 3.9775 | 0.2921 |
| 0.1455 | 150.0 | 15450 | 3.9304 | 0.2990 |
| 0.1819 | 151.0 | 15553 | 4.0299 | 0.2715 |
| 0.1532 | 152.0 | 15656 | 4.1219 | 0.2680 |
| 0.1638 | 153.0 | 15759 | 4.1465 | 0.3093 |
| 0.1579 | 154.0 | 15862 | 4.0596 | 0.2955 |
| 0.1668 | 155.0 | 15965 | 4.0857 | 0.3127 |
| 0.1401 | 156.0 | 16068 | 4.1669 | 0.2921 |
| 0.1452 | 157.0 | 16171 | 4.0430 | 0.2887 |
| 0.1568 | 158.0 | 16274 | 4.0157 | 0.2990 |
| 0.1771 | 159.0 | 16377 | 4.0770 | 0.3093 |
| 0.1383 | 160.0 | 16480 | 4.0888 | 0.2680 |
| 0.1572 | 161.0 | 16583 | 4.2271 | 0.2646 |
| 0.1472 | 162.0 | 16686 | 4.0215 | 0.2852 |
| 0.1534 | 163.0 | 16789 | 4.2248 | 0.3058 |
| 0.136 | 164.0 | 16892 | 4.2159 | 0.2852 |
| 0.1525 | 165.0 | 16995 | 4.0565 | 0.2990 |
| 0.1418 | 166.0 | 17098 | 4.1175 | 0.2852 |
| 0.1374 | 167.0 | 17201 | 4.1708 | 0.2921 |
| 0.1538 | 168.0 | 17304 | 4.2566 | 0.2784 |
| 0.1365 | 169.0 | 17407 | 4.3063 | 0.2577 |
| 0.1661 | 170.0 | 17510 | 4.2231 | 0.2887 |
| 0.1278 | 171.0 | 17613 | 4.3125 | 0.2646 |
| 0.1418 | 172.0 | 17716 | 4.3337 | 0.2646 |
| 0.1538 | 173.0 | 17819 | 4.3129 | 0.2852 |
| 0.1315 | 174.0 | 17922 | 4.3102 | 0.2680 |
| 0.128 | 175.0 | 18025 | 4.2853 | 0.2749 |
| 0.1398 | 176.0 | 18128 | 4.1560 | 0.2715 |
| 0.1525 | 177.0 | 18231 | 4.1812 | 0.2955 |
| 0.1603 | 178.0 | 18334 | 4.1262 | 0.3093 |
| 0.1412 | 179.0 | 18437 | 4.2778 | 0.2887 |
| 0.1521 | 180.0 | 18540 | 4.2881 | 0.2680 |
| 0.1404 | 181.0 | 18643 | 4.3147 | 0.2852 |
| 0.1468 | 182.0 | 18746 | 4.2042 | 0.2749 |
| 0.1448 | 183.0 | 18849 | 4.2110 | 0.2784 |
| 0.1299 | 184.0 | 18952 | 4.2314 | 0.2921 |
| 0.1361 | 185.0 | 19055 | 4.2993 | 0.2749 |
| 0.1455 | 186.0 | 19158 | 4.3509 | 0.3058 |
| 0.1345 | 187.0 | 19261 | 4.2828 | 0.2921 |
| 0.1394 | 188.0 | 19364 | 4.1001 | 0.3093 |
| 0.1415 | 189.0 | 19467 | 4.2179 | 0.2955 |
| 0.1235 | 190.0 | 19570 | 4.2963 | 0.3093 |
| 0.1373 | 191.0 | 19673 | 4.1833 | 0.2715 |
| 0.1323 | 192.0 | 19776 | 4.3057 | 0.2852 |
| 0.1188 | 193.0 | 19879 | 4.3819 | 0.2749 |
| 0.1528 | 194.0 | 19982 | 4.3091 | 0.2749 |
| 0.1365 | 195.0 | 20085 | 4.3870 | 0.2887 |
| 0.1187 | 196.0 | 20188 | 4.2303 | 0.2715 |
| 0.1409 | 197.0 | 20291 | 4.2344 | 0.2784 |
| 0.1346 | 198.0 | 20394 | 4.0637 | 0.3162 |
| 0.1449 | 199.0 | 20497 | 4.3022 | 0.2852 |
| 0.1415 | 200.0 | 20600 | 4.2672 | 0.2990 |
| 0.1283 | 201.0 | 20703 | 4.2363 | 0.2749 |
| 0.1469 | 202.0 | 20806 | 4.2714 | 0.2990 |
| 0.1288 | 203.0 | 20909 | 4.3246 | 0.2818 |
| 0.1334 | 204.0 | 21012 | 4.1711 | 0.2887 |
| 0.1419 | 205.0 | 21115 | 4.3263 | 0.2784 |
| 0.1395 | 206.0 | 21218 | 4.2855 | 0.2990 |
| 0.1255 | 207.0 | 21321 | 4.4301 | 0.2474 |
| 0.1288 | 208.0 | 21424 | 4.3735 | 0.2955 |
| 0.1395 | 209.0 | 21527 | 4.3549 | 0.2852 |
| 0.1144 | 210.0 | 21630 | 4.4569 | 0.2715 |
| 0.1185 | 211.0 | 21733 | 4.5008 | 0.2921 |
| 0.1578 | 212.0 | 21836 | 4.2313 | 0.2818 |
| 0.1434 | 213.0 | 21939 | 4.4445 | 0.2715 |
| 0.1147 | 214.0 | 22042 | 4.4329 | 0.2818 |
| 0.1239 | 215.0 | 22145 | 4.4102 | 0.2715 |
| 0.1315 | 216.0 | 22248 | 4.2503 | 0.2955 |
| 0.1413 | 217.0 | 22351 | 4.5559 | 0.2955 |
| 0.1137 | 218.0 | 22454 | 4.4504 | 0.2990 |
| 0.1412 | 219.0 | 22557 | 4.3377 | 0.3058 |
| 0.1051 | 220.0 | 22660 | 4.5250 | 0.2852 |
| 0.1314 | 221.0 | 22763 | 4.4539 | 0.2646 |
| 0.1284 | 222.0 | 22866 | 4.3481 | 0.2921 |
| 0.1159 | 223.0 | 22969 | 4.4284 | 0.3127 |
| 0.1219 | 224.0 | 23072 | 4.5069 | 0.2749 |
| 0.1183 | 225.0 | 23175 | 4.5461 | 0.2990 |
| 0.1172 | 226.0 | 23278 | 4.3986 | 0.2921 |
| 0.1216 | 227.0 | 23381 | 4.5154 | 0.3127 |
| 0.1207 | 228.0 | 23484 | 4.4848 | 0.2887 |
| 0.1303 | 229.0 | 23587 | 4.3925 | 0.2921 |
| 0.1238 | 230.0 | 23690 | 4.3748 | 0.2990 |
| 0.1126 | 231.0 | 23793 | 4.4806 | 0.3127 |
| 0.1227 | 232.0 | 23896 | 4.4439 | 0.2921 |
| 0.1146 | 233.0 | 23999 | 4.5228 | 0.2921 |
| 0.1168 | 234.0 | 24102 | 4.5614 | 0.2887 |
| 0.1219 | 235.0 | 24205 | 4.4129 | 0.2921 |
| 0.1181 | 236.0 | 24308 | 4.5444 | 0.2990 |
| 0.1167 | 237.0 | 24411 | 4.4038 | 0.2749 |
| 0.1173 | 238.0 | 24514 | 4.3967 | 0.3230 |
| 0.1052 | 239.0 | 24617 | 4.5055 | 0.2887 |
| 0.1216 | 240.0 | 24720 | 4.5693 | 0.3024 |
| 0.1242 | 241.0 | 24823 | 4.4906 | 0.2852 |
| 0.1553 | 242.0 | 24926 | 4.4971 | 0.2990 |
| 0.1377 | 243.0 | 25029 | 4.4536 | 0.2818 |
| 0.1126 | 244.0 | 25132 | 4.5324 | 0.2852 |
| 0.1321 | 245.0 | 25235 | 4.8037 | 0.2646 |
| 0.115 | 246.0 | 25338 | 4.6682 | 0.2715 |
| 0.1311 | 247.0 | 25441 | 4.6374 | 0.3196 |
| 0.1224 | 248.0 | 25544 | 4.7803 | 0.2680 |
| 0.1291 | 249.0 | 25647 | 4.6564 | 0.3093 |
| 0.1138 | 250.0 | 25750 | 4.5188 | 0.3024 |
| 0.1159 | 251.0 | 25853 | 4.5116 | 0.2990 |
| 0.1172 | 252.0 | 25956 | 4.7039 | 0.2921 |
| 0.1256 | 253.0 | 26059 | 4.6462 | 0.2852 |
| 0.1227 | 254.0 | 26162 | 4.7470 | 0.2852 |
| 0.1186 | 255.0 | 26265 | 4.6541 | 0.2921 |
| 0.1114 | 256.0 | 26368 | 4.6005 | 0.2887 |
| 0.1154 | 257.0 | 26471 | 4.5707 | 0.2818 |
| 0.1229 | 258.0 | 26574 | 4.5180 | 0.2749 |
| 0.1138 | 259.0 | 26677 | 4.6220 | 0.2818 |
| 0.0987 | 260.0 | 26780 | 4.6446 | 0.2921 |
| 0.1056 | 261.0 | 26883 | 4.7600 | 0.2715 |
| 0.1362 | 262.0 | 26986 | 4.6703 | 0.2680 |
| 0.1131 | 263.0 | 27089 | 4.6065 | 0.2715 |
| 0.1127 | 264.0 | 27192 | 4.5125 | 0.2784 |
| 0.1248 | 265.0 | 27295 | 4.5967 | 0.2921 |
| 0.111 | 266.0 | 27398 | 4.6182 | 0.2474 |
| 0.1203 | 267.0 | 27501 | 4.5969 | 0.2887 |
| 0.1242 | 268.0 | 27604 | 4.5437 | 0.2749 |
| 0.1041 | 269.0 | 27707 | 4.7105 | 0.2887 |
| 0.1233 | 270.0 | 27810 | 4.6305 | 0.2784 |
| 0.1003 | 271.0 | 27913 | 4.5865 | 0.2990 |
| 0.1144 | 272.0 | 28016 | 4.6216 | 0.2852 |
| 0.1061 | 273.0 | 28119 | 4.5387 | 0.2955 |
| 0.1102 | 274.0 | 28222 | 4.5850 | 0.2921 |
| 0.109 | 275.0 | 28325 | 4.6442 | 0.2921 |
| 0.1277 | 276.0 | 28428 | 4.5837 | 0.2612 |
| 0.1101 | 277.0 | 28531 | 4.7880 | 0.2784 |
| 0.1136 | 278.0 | 28634 | 4.5664 | 0.2646 |
| 0.1125 | 279.0 | 28737 | 4.7245 | 0.2990 |
| 0.1207 | 280.0 | 28840 | 4.7841 | 0.2852 |
| 0.1223 | 281.0 | 28943 | 4.7736 | 0.2852 |
| 0.1132 | 282.0 | 29046 | 4.6193 | 0.2852 |
| 0.1118 | 283.0 | 29149 | 4.7512 | 0.2921 |
| 0.1196 | 284.0 | 29252 | 4.7773 | 0.2680 |
| 0.1035 | 285.0 | 29355 | 4.6611 | 0.2921 |
| 0.1079 | 286.0 | 29458 | 4.6916 | 0.2921 |
| 0.1124 | 287.0 | 29561 | 4.6505 | 0.2680 |
| 0.1024 | 288.0 | 29664 | 4.6303 | 0.2680 |
| 0.101 | 289.0 | 29767 | 4.6079 | 0.2852 |
| 0.124 | 290.0 | 29870 | 4.4566 | 0.2887 |
| 0.1121 | 291.0 | 29973 | 4.5021 | 0.2887 |
| 0.1005 | 292.0 | 30076 | 4.5479 | 0.2852 |
| 0.1152 | 293.0 | 30179 | 4.6658 | 0.2749 |
| 0.113 | 294.0 | 30282 | 4.5608 | 0.2749 |
| 0.112 | 295.0 | 30385 | 4.6577 | 0.2852 |
| 0.1095 | 296.0 | 30488 | 4.5323 | 0.2784 |
| 0.1053 | 297.0 | 30591 | 4.6355 | 0.2921 |
| 0.1138 | 298.0 | 30694 | 4.7187 | 0.2852 |
| 0.1105 | 299.0 | 30797 | 4.6037 | 0.2784 |
| 0.0944 | 300.0 | 30900 | 4.7195 | 0.2646 |
| 0.1027 | 301.0 | 31003 | 4.6786 | 0.2749 |
| 0.0994 | 302.0 | 31106 | 4.7625 | 0.2990 |
| 0.1229 | 303.0 | 31209 | 4.8497 | 0.2715 |
| 0.1094 | 304.0 | 31312 | 4.7454 | 0.2612 |
| 0.1225 | 305.0 | 31415 | 4.7722 | 0.2818 |
| 0.102 | 306.0 | 31518 | 4.8431 | 0.2749 |
| 0.1283 | 307.0 | 31621 | 4.7977 | 0.2784 |
| 0.109 | 308.0 | 31724 | 4.6382 | 0.3127 |
| 0.1193 | 309.0 | 31827 | 4.7094 | 0.2543 |
| 0.1106 | 310.0 | 31930 | 4.7562 | 0.2921 |
| 0.1032 | 311.0 | 32033 | 4.7265 | 0.2577 |
| 0.114 | 312.0 | 32136 | 4.7516 | 0.2852 |
| 0.1265 | 313.0 | 32239 | 4.7882 | 0.2474 |
| 0.1252 | 314.0 | 32342 | 4.7084 | 0.2543 |
| 0.1102 | 315.0 | 32445 | 4.6895 | 0.2887 |
| 0.0984 | 316.0 | 32548 | 4.6341 | 0.3024 |
| 0.0978 | 317.0 | 32651 | 4.6211 | 0.3196 |
| 0.1068 | 318.0 | 32754 | 4.7675 | 0.2921 |
| 0.1017 | 319.0 | 32857 | 4.7061 | 0.2784 |
| 0.1138 | 320.0 | 32960 | 4.7139 | 0.2784 |
| 0.0997 | 321.0 | 33063 | 4.7117 | 0.2852 |
| 0.1036 | 322.0 | 33166 | 4.7136 | 0.3058 |
| 0.0988 | 323.0 | 33269 | 4.7139 | 0.2852 |
| 0.1052 | 324.0 | 33372 | 4.7646 | 0.3058 |
| 0.0957 | 325.0 | 33475 | 4.7901 | 0.2955 |
| 0.1009 | 326.0 | 33578 | 4.7048 | 0.2749 |
| 0.0957 | 327.0 | 33681 | 4.6212 | 0.2955 |
| 0.1244 | 328.0 | 33784 | 4.7481 | 0.2852 |
| 0.1021 | 329.0 | 33887 | 4.7497 | 0.2852 |
| 0.1017 | 330.0 | 33990 | 4.8310 | 0.2749 |
| 0.0957 | 331.0 | 34093 | 4.6941 | 0.3093 |
| 0.1042 | 332.0 | 34196 | 4.7253 | 0.3127 |
| 0.1046 | 333.0 | 34299 | 4.8593 | 0.2784 |
| 0.1103 | 334.0 | 34402 | 4.8480 | 0.2715 |
| 0.09 | 335.0 | 34505 | 4.9101 | 0.3162 |
| 0.1108 | 336.0 | 34608 | 4.7839 | 0.2887 |
| 0.1043 | 337.0 | 34711 | 4.9543 | 0.2680 |
| 0.104 | 338.0 | 34814 | 4.8026 | 0.2990 |
| 0.1015 | 339.0 | 34917 | 4.8008 | 0.2887 |
| 0.1029 | 340.0 | 35020 | 4.9069 | 0.2990 |
| 0.1002 | 341.0 | 35123 | 4.9242 | 0.3024 |
| 0.1076 | 342.0 | 35226 | 4.7199 | 0.2921 |
| 0.1055 | 343.0 | 35329 | 4.8440 | 0.3162 |
| 0.0925 | 344.0 | 35432 | 4.8572 | 0.3230 |
| 0.0827 | 345.0 | 35535 | 4.9133 | 0.3024 |
| 0.1105 | 346.0 | 35638 | 4.9865 | 0.2852 |
| 0.0875 | 347.0 | 35741 | 4.7973 | 0.2955 |
| 0.106 | 348.0 | 35844 | 4.8696 | 0.2955 |
| 0.1083 | 349.0 | 35947 | 4.9786 | 0.2646 |
| 0.105 | 350.0 | 36050 | 4.9114 | 0.2680 |
| 0.1075 | 351.0 | 36153 | 4.8693 | 0.2612 |
| 0.1026 | 352.0 | 36256 | 4.8735 | 0.2887 |
| 0.101 | 353.0 | 36359 | 5.0447 | 0.2646 |
| 0.0944 | 354.0 | 36462 | 4.9492 | 0.2784 |
| 0.1055 | 355.0 | 36565 | 4.9895 | 0.2715 |
| 0.0858 | 356.0 | 36668 | 5.0955 | 0.2440 |
| 0.0955 | 357.0 | 36771 | 5.0106 | 0.2990 |
| 0.1108 | 358.0 | 36874 | 4.9109 | 0.3058 |
| 0.1179 | 359.0 | 36977 | 4.9082 | 0.2852 |
| 0.0984 | 360.0 | 37080 | 4.8480 | 0.3058 |
| 0.0997 | 361.0 | 37183 | 4.8957 | 0.2715 |
| 0.1128 | 362.0 | 37286 | 4.9127 | 0.3058 |
| 0.0961 | 363.0 | 37389 | 5.0965 | 0.2784 |
| 0.1096 | 364.0 | 37492 | 5.0317 | 0.2887 |
| 0.0916 | 365.0 | 37595 | 4.9745 | 0.2887 |
| 0.1057 | 366.0 | 37698 | 4.8775 | 0.2680 |
| 0.0932 | 367.0 | 37801 | 5.0282 | 0.2680 |
| 0.1072 | 368.0 | 37904 | 4.8097 | 0.2646 |
| 0.0973 | 369.0 | 38007 | 4.9321 | 0.2749 |
| 0.1034 | 370.0 | 38110 | 4.8176 | 0.2715 |
| 0.1084 | 371.0 | 38213 | 4.8562 | 0.2852 |
| 0.0957 | 372.0 | 38316 | 4.9466 | 0.2852 |
| 0.1049 | 373.0 | 38419 | 4.8515 | 0.2612 |
| 0.097 | 374.0 | 38522 | 4.8833 | 0.2887 |
| 0.1008 | 375.0 | 38625 | 4.9442 | 0.2887 |
| 0.1019 | 376.0 | 38728 | 4.8345 | 0.2818 |
| 0.1083 | 377.0 | 38831 | 4.9350 | 0.2749 |
| 0.1181 | 378.0 | 38934 | 4.8605 | 0.2612 |
| 0.1043 | 379.0 | 39037 | 4.8783 | 0.2921 |
| 0.1212 | 380.0 | 39140 | 4.8641 | 0.2852 |
| 0.0941 | 381.0 | 39243 | 4.9772 | 0.2955 |
| 0.0986 | 382.0 | 39346 | 4.9191 | 0.2715 |
| 0.1054 | 383.0 | 39449 | 5.0695 | 0.2818 |
| 0.1066 | 384.0 | 39552 | 5.1141 | 0.2852 |
| 0.0929 | 385.0 | 39655 | 5.0176 | 0.2680 |
| 0.102 | 386.0 | 39758 | 4.7790 | 0.2749 |
| 0.103 | 387.0 | 39861 | 4.7348 | 0.2818 |
| 0.107 | 388.0 | 39964 | 4.6667 | 0.2921 |
| 0.0922 | 389.0 | 40067 | 4.6687 | 0.2680 |
| 0.102 | 390.0 | 40170 | 4.8450 | 0.2680 |
| 0.0958 | 391.0 | 40273 | 5.1279 | 0.2680 |
| 0.0908 | 392.0 | 40376 | 4.9624 | 0.2921 |
| 0.0988 | 393.0 | 40479 | 5.1676 | 0.2955 |
| 0.0995 | 394.0 | 40582 | 4.8726 | 0.3058 |
| 0.1087 | 395.0 | 40685 | 4.9525 | 0.2818 |
| 0.11 | 396.0 | 40788 | 5.0258 | 0.2543 |
| 0.0916 | 397.0 | 40891 | 5.0114 | 0.3265 |
| 0.089 | 398.0 | 40994 | 4.9689 | 0.3058 |
| 0.1089 | 399.0 | 41097 | 4.8648 | 0.3058 |
| 0.085 | 400.0 | 41200 | 4.7376 | 0.2990 |
| 0.1135 | 401.0 | 41303 | 4.9685 | 0.2955 |
| 0.1032 | 402.0 | 41406 | 4.6955 | 0.3162 |
| 0.0987 | 403.0 | 41509 | 4.8972 | 0.2990 |
| 0.1112 | 404.0 | 41612 | 4.8028 | 0.2887 |
| 0.0926 | 405.0 | 41715 | 4.6858 | 0.3265 |
| 0.1032 | 406.0 | 41818 | 4.7680 | 0.3127 |
| 0.1066 | 407.0 | 41921 | 4.8087 | 0.2887 |
| 0.1053 | 408.0 | 42024 | 4.8871 | 0.2852 |
| 0.0999 | 409.0 | 42127 | 4.7056 | 0.2818 |
| 0.0929 | 410.0 | 42230 | 4.8846 | 0.2852 |
| 0.1138 | 411.0 | 42333 | 4.7741 | 0.2990 |
| 0.1126 | 412.0 | 42436 | 4.9157 | 0.2887 |
| 0.0835 | 413.0 | 42539 | 4.9607 | 0.2784 |
| 0.1004 | 414.0 | 42642 | 4.7718 | 0.3024 |
| 0.0972 | 415.0 | 42745 | 4.8288 | 0.3058 |
| 0.1023 | 416.0 | 42848 | 4.9083 | 0.2646 |
| 0.0948 | 417.0 | 42951 | 4.8509 | 0.2887 |
| 0.0918 | 418.0 | 43054 | 4.8323 | 0.2715 |
| 0.0961 | 419.0 | 43157 | 4.9570 | 0.2818 |
| 0.0911 | 420.0 | 43260 | 4.9581 | 0.2680 |
| 0.0927 | 421.0 | 43363 | 4.9856 | 0.2852 |
| 0.0907 | 422.0 | 43466 | 4.9146 | 0.2818 |
| 0.1039 | 423.0 | 43569 | 4.7813 | 0.2818 |
| 0.1093 | 424.0 | 43672 | 4.9574 | 0.3024 |
| 0.0859 | 425.0 | 43775 | 4.8934 | 0.2818 |
| 0.111 | 426.0 | 43878 | 4.8562 | 0.2887 |
| 0.0944 | 427.0 | 43981 | 4.8261 | 0.3058 |
| 0.1 | 428.0 | 44084 | 4.8226 | 0.2990 |
| 0.0965 | 429.0 | 44187 | 4.8104 | 0.3127 |
| 0.0905 | 430.0 | 44290 | 4.7416 | 0.3058 |
| 0.1095 | 431.0 | 44393 | 5.0877 | 0.2715 |
| 0.0855 | 432.0 | 44496 | 4.9392 | 0.2784 |
| 0.1079 | 433.0 | 44599 | 4.8227 | 0.3024 |
| 0.102 | 434.0 | 44702 | 4.9779 | 0.2784 |
| 0.0888 | 435.0 | 44805 | 4.9958 | 0.2955 |
| 0.0842 | 436.0 | 44908 | 4.7461 | 0.3093 |
| 0.0918 | 437.0 | 45011 | 5.0597 | 0.2646 |
| 0.0911 | 438.0 | 45114 | 4.9771 | 0.2784 |
| 0.0859 | 439.0 | 45217 | 4.8373 | 0.2990 |
| 0.0916 | 440.0 | 45320 | 4.7408 | 0.3093 |
| 0.0988 | 441.0 | 45423 | 4.7879 | 0.2612 |
| 0.0994 | 442.0 | 45526 | 4.7355 | 0.2990 |
| 0.102 | 443.0 | 45629 | 4.8696 | 0.3196 |
| 0.0951 | 444.0 | 45732 | 4.9578 | 0.2955 |
| 0.0843 | 445.0 | 45835 | 5.0340 | 0.3093 |
| 0.0927 | 446.0 | 45938 | 5.0122 | 0.3058 |
| 0.1028 | 447.0 | 46041 | 4.8365 | 0.2887 |
| 0.0988 | 448.0 | 46144 | 4.9790 | 0.2543 |
| 0.0993 | 449.0 | 46247 | 4.8574 | 0.2818 |
| 0.0935 | 450.0 | 46350 | 5.0489 | 0.2784 |
| 0.0942 | 451.0 | 46453 | 4.9593 | 0.2715 |
| 0.0875 | 452.0 | 46556 | 4.9571 | 0.2887 |
| 0.0968 | 453.0 | 46659 | 4.8004 | 0.3058 |
| 0.0969 | 454.0 | 46762 | 5.1910 | 0.2852 |
| 0.0954 | 455.0 | 46865 | 5.0355 | 0.2784 |
| 0.1008 | 456.0 | 46968 | 4.8536 | 0.2990 |
| 0.09 | 457.0 | 47071 | 4.7043 | 0.2715 |
| 0.1064 | 458.0 | 47174 | 4.8734 | 0.2749 |
| 0.0902 | 459.0 | 47277 | 4.9062 | 0.3299 |
| 0.0831 | 460.0 | 47380 | 5.0669 | 0.3058 |
| 0.1008 | 461.0 | 47483 | 5.1403 | 0.2784 |
| 0.0883 | 462.0 | 47586 | 5.1774 | 0.2818 |
| 0.0915 | 463.0 | 47689 | 5.1486 | 0.2852 |
| 0.1124 | 464.0 | 47792 | 5.1076 | 0.3093 |
| 0.0892 | 465.0 | 47895 | 5.0262 | 0.2784 |
| 0.088 | 466.0 | 47998 | 5.1672 | 0.2749 |
| 0.0969 | 467.0 | 48101 | 5.1796 | 0.2784 |
| 0.0851 | 468.0 | 48204 | 5.1422 | 0.2646 |
| 0.094 | 469.0 | 48307 | 5.1663 | 0.2509 |
| 0.085 | 470.0 | 48410 | 5.2027 | 0.2715 |
| 0.0953 | 471.0 | 48513 | 5.0788 | 0.2955 |
| 0.097 | 472.0 | 48616 | 5.1568 | 0.2680 |
| 0.092 | 473.0 | 48719 | 5.0175 | 0.2749 |
| 0.0876 | 474.0 | 48822 | 5.0064 | 0.2852 |
| 0.0984 | 475.0 | 48925 | 4.9885 | 0.2784 |
| 0.0781 | 476.0 | 49028 | 5.1671 | 0.2715 |
| 0.1001 | 477.0 | 49131 | 5.2429 | 0.2749 |
| 0.085 | 478.0 | 49234 | 5.2670 | 0.2749 |
| 0.0924 | 479.0 | 49337 | 5.0759 | 0.2784 |
| 0.0855 | 480.0 | 49440 | 5.2673 | 0.2955 |
| 0.1018 | 481.0 | 49543 | 5.1715 | 0.3127 |
| 0.0883 | 482.0 | 49646 | 5.0860 | 0.2887 |
| 0.101 | 483.0 | 49749 | 5.1873 | 0.2818 |
| 0.1061 | 484.0 | 49852 | 5.1156 | 0.2852 |
| 0.1091 | 485.0 | 49955 | 5.1338 | 0.2887 |
| 0.0935 | 486.0 | 50058 | 5.0872 | 0.2680 |
| 0.0983 | 487.0 | 50161 | 5.0349 | 0.2818 |
| 0.0955 | 488.0 | 50264 | 5.1492 | 0.2955 |
| 0.1065 | 489.0 | 50367 | 5.0529 | 0.2749 |
| 0.0771 | 490.0 | 50470 | 5.0177 | 0.2818 |
| 0.0962 | 491.0 | 50573 | 5.0682 | 0.2887 |
| 0.0701 | 492.0 | 50676 | 5.1446 | 0.2852 |
| 0.0908 | 493.0 | 50779 | 5.1319 | 0.2955 |
| 0.0957 | 494.0 | 50882 | 5.1732 | 0.2543 |
| 0.1039 | 495.0 | 50985 | 5.1408 | 0.2715 |
| 0.0947 | 496.0 | 51088 | 5.1906 | 0.2680 |
| 0.097 | 497.0 | 51191 | 5.3184 | 0.2405 |
| 0.0848 | 498.0 | 51294 | 5.1346 | 0.2921 |
| 0.0855 | 499.0 | 51397 | 5.0153 | 0.2784 |
| 0.1041 | 500.0 | 51500 | 5.1230 | 0.2612 |
| 0.0936 | 501.0 | 51603 | 5.1331 | 0.2715 |
| 0.0934 | 502.0 | 51706 | 5.1767 | 0.2612 |
| 0.0966 | 503.0 | 51809 | 5.0495 | 0.2921 |
| 0.0953 | 504.0 | 51912 | 5.0618 | 0.2543 |
| 0.0852 | 505.0 | 52015 | 5.1167 | 0.2818 |
| 0.0889 | 506.0 | 52118 | 5.0981 | 0.3058 |
| 0.0854 | 507.0 | 52221 | 5.1853 | 0.2955 |
| 0.0877 | 508.0 | 52324 | 5.2161 | 0.2887 |
| 0.1074 | 509.0 | 52427 | 5.1670 | 0.2646 |
| 0.1055 | 510.0 | 52530 | 5.0545 | 0.2749 |
| 0.0789 | 511.0 | 52633 | 5.0691 | 0.2509 |
| 0.0816 | 512.0 | 52736 | 5.0847 | 0.2887 |
| 0.0818 | 513.0 | 52839 | 5.1307 | 0.3024 |
| 0.0999 | 514.0 | 52942 | 5.1029 | 0.2852 |
| 0.0787 | 515.0 | 53045 | 5.2270 | 0.2955 |
| 0.0892 | 516.0 | 53148 | 5.1925 | 0.3024 |
| 0.0995 | 517.0 | 53251 | 5.2463 | 0.2955 |
| 0.0812 | 518.0 | 53354 | 5.3743 | 0.2955 |
| 0.101 | 519.0 | 53457 | 5.1906 | 0.2852 |
| 0.082 | 520.0 | 53560 | 5.1656 | 0.2887 |
| 0.0904 | 521.0 | 53663 | 5.1051 | 0.2921 |
| 0.0909 | 522.0 | 53766 | 5.2543 | 0.2990 |
| 0.1033 | 523.0 | 53869 | 5.2171 | 0.2784 |
| 0.0793 | 524.0 | 53972 | 5.2428 | 0.2955 |
| 0.0879 | 525.0 | 54075 | 5.3480 | 0.2955 |
| 0.0836 | 526.0 | 54178 | 5.2810 | 0.2784 |
| 0.0886 | 527.0 | 54281 | 5.2532 | 0.2955 |
| 0.0881 | 528.0 | 54384 | 5.4993 | 0.2646 |
| 0.1158 | 529.0 | 54487 | 5.2754 | 0.2749 |
| 0.0984 | 530.0 | 54590 | 5.2237 | 0.2509 |
| 0.0974 | 531.0 | 54693 | 5.4133 | 0.2715 |
| 0.0892 | 532.0 | 54796 | 5.2500 | 0.2852 |
| 0.0892 | 533.0 | 54899 | 5.3204 | 0.2612 |
| 0.0873 | 534.0 | 55002 | 5.2275 | 0.2749 |
| 0.0882 | 535.0 | 55105 | 5.2049 | 0.2921 |
| 0.0915 | 536.0 | 55208 | 5.2155 | 0.2990 |
| 0.0759 | 537.0 | 55311 | 5.2795 | 0.2818 |
| 0.0893 | 538.0 | 55414 | 5.2271 | 0.2852 |
| 0.0845 | 539.0 | 55517 | 5.2346 | 0.2680 |
| 0.0912 | 540.0 | 55620 | 5.2443 | 0.3093 |
| 0.0804 | 541.0 | 55723 | 5.2777 | 0.2921 |
| 0.0753 | 542.0 | 55826 | 5.3583 | 0.2680 |
| 0.0829 | 543.0 | 55929 | 5.1900 | 0.2852 |
| 0.0984 | 544.0 | 56032 | 5.1930 | 0.2990 |
| 0.0993 | 545.0 | 56135 | 5.1223 | 0.3093 |
| 0.0793 | 546.0 | 56238 | 5.2101 | 0.3024 |
| 0.0912 | 547.0 | 56341 | 5.2742 | 0.2749 |
| 0.0892 | 548.0 | 56444 | 5.1734 | 0.2921 |
| 0.1029 | 549.0 | 56547 | 5.2658 | 0.2921 |
| 0.0863 | 550.0 | 56650 | 5.2372 | 0.2990 |
| 0.1017 | 551.0 | 56753 | 5.2105 | 0.2680 |
| 0.0883 | 552.0 | 56856 | 5.1055 | 0.2955 |
| 0.1042 | 553.0 | 56959 | 5.2432 | 0.2612 |
| 0.0817 | 554.0 | 57062 | 5.2423 | 0.2921 |
| 0.0869 | 555.0 | 57165 | 5.2250 | 0.2784 |
| 0.0843 | 556.0 | 57268 | 5.1962 | 0.2887 |
| 0.0887 | 557.0 | 57371 | 5.1148 | 0.2990 |
| 0.0838 | 558.0 | 57474 | 5.0202 | 0.2852 |
| 0.0759 | 559.0 | 57577 | 5.0678 | 0.3265 |
| 0.0934 | 560.0 | 57680 | 4.9558 | 0.3265 |
| 0.0858 | 561.0 | 57783 | 5.0168 | 0.3093 |
| 0.0873 | 562.0 | 57886 | 5.0457 | 0.3058 |
| 0.0902 | 563.0 | 57989 | 5.0469 | 0.3230 |
| 0.0793 | 564.0 | 58092 | 4.9871 | 0.3265 |
| 0.0882 | 565.0 | 58195 | 5.1584 | 0.3162 |
| 0.0984 | 566.0 | 58298 | 5.0747 | 0.3230 |
| 0.0824 | 567.0 | 58401 | 5.1735 | 0.3196 |
| 0.0794 | 568.0 | 58504 | 5.1323 | 0.3265 |
| 0.0847 | 569.0 | 58607 | 5.1292 | 0.3230 |
| 0.0833 | 570.0 | 58710 | 5.0710 | 0.3265 |
| 0.0831 | 571.0 | 58813 | 5.1205 | 0.2955 |
| 0.0922 | 572.0 | 58916 | 5.1007 | 0.2990 |
| 0.0906 | 573.0 | 59019 | 5.1924 | 0.2955 |
| 0.1079 | 574.0 | 59122 | 5.1933 | 0.2955 |
| 0.0943 | 575.0 | 59225 | 5.1558 | 0.3024 |
| 0.0877 | 576.0 | 59328 | 5.1573 | 0.2990 |
| 0.0977 | 577.0 | 59431 | 5.0311 | 0.2990 |
| 0.0751 | 578.0 | 59534 | 5.1581 | 0.2887 |
| 0.096 | 579.0 | 59637 | 5.2115 | 0.2818 |
| 0.0902 | 580.0 | 59740 | 5.2544 | 0.2921 |
| 0.1052 | 581.0 | 59843 | 5.1612 | 0.3196 |
| 0.0763 | 582.0 | 59946 | 5.1434 | 0.2921 |
| 0.0904 | 583.0 | 60049 | 5.1911 | 0.2955 |
| 0.0868 | 584.0 | 60152 | 5.1716 | 0.3024 |
| 0.091 | 585.0 | 60255 | 5.1767 | 0.2818 |
| 0.0936 | 586.0 | 60358 | 5.1801 | 0.2852 |
| 0.082 | 587.0 | 60461 | 5.0496 | 0.2852 |
| 0.0999 | 588.0 | 60564 | 5.2585 | 0.2852 |
| 0.0826 | 589.0 | 60667 | 5.2566 | 0.2887 |
| 0.0949 | 590.0 | 60770 | 5.3015 | 0.2990 |
| 0.0828 | 591.0 | 60873 | 5.1411 | 0.3093 |
| 0.0827 | 592.0 | 60976 | 5.1199 | 0.3024 |
| 0.0943 | 593.0 | 61079 | 5.1063 | 0.3024 |
| 0.076 | 594.0 | 61182 | 5.1141 | 0.3093 |
| 0.0917 | 595.0 | 61285 | 5.1414 | 0.2990 |
| 0.0976 | 596.0 | 61388 | 5.1441 | 0.2955 |
| 0.0804 | 597.0 | 61491 | 5.1681 | 0.3024 |
| 0.0923 | 598.0 | 61594 | 5.1333 | 0.3024 |
| 0.093 | 599.0 | 61697 | 5.1260 | 0.2921 |
| 0.0926 | 600.0 | 61800 | 5.1560 | 0.3196 |
| 0.0844 | 601.0 | 61903 | 5.1931 | 0.2990 |
| 0.0847 | 602.0 | 62006 | 5.0865 | 0.3024 |
| 0.0822 | 603.0 | 62109 | 5.0862 | 0.3127 |
| 0.0771 | 604.0 | 62212 | 5.0475 | 0.3058 |
| 0.0885 | 605.0 | 62315 | 5.0884 | 0.3093 |
| 0.0809 | 606.0 | 62418 | 5.2159 | 0.2921 |
| 0.0892 | 607.0 | 62521 | 5.0867 | 0.3093 |
| 0.085 | 608.0 | 62624 | 5.0848 | 0.3058 |
| 0.0828 | 609.0 | 62727 | 5.2343 | 0.3093 |
| 0.0978 | 610.0 | 62830 | 5.1203 | 0.2921 |
| 0.0922 | 611.0 | 62933 | 5.2543 | 0.2921 |
| 0.091 | 612.0 | 63036 | 5.1228 | 0.2784 |
| 0.0926 | 613.0 | 63139 | 5.3064 | 0.2887 |
| 0.078 | 614.0 | 63242 | 5.3367 | 0.2921 |
| 0.0791 | 615.0 | 63345 | 5.2738 | 0.3058 |
| 0.0803 | 616.0 | 63448 | 5.2698 | 0.2990 |
| 0.0936 | 617.0 | 63551 | 5.3062 | 0.3162 |
| 0.0894 | 618.0 | 63654 | 5.3834 | 0.2990 |
| 0.0794 | 619.0 | 63757 | 5.2768 | 0.3196 |
| 0.0885 | 620.0 | 63860 | 5.2569 | 0.2990 |
| 0.0866 | 621.0 | 63963 | 5.3325 | 0.2955 |
| 0.079 | 622.0 | 64066 | 5.2798 | 0.2887 |
| 0.084 | 623.0 | 64169 | 5.4603 | 0.2715 |
| 0.0886 | 624.0 | 64272 | 5.2922 | 0.2784 |
| 0.0726 | 625.0 | 64375 | 5.1952 | 0.2921 |
| 0.0893 | 626.0 | 64478 | 5.4114 | 0.2543 |
| 0.0881 | 627.0 | 64581 | 5.4867 | 0.2509 |
| 0.079 | 628.0 | 64684 | 5.4838 | 0.2887 |
| 0.0933 | 629.0 | 64787 | 5.5214 | 0.2921 |
| 0.0795 | 630.0 | 64890 | 5.4256 | 0.2818 |
| 0.0882 | 631.0 | 64993 | 5.3628 | 0.2818 |
| 0.0826 | 632.0 | 65096 | 5.2816 | 0.2921 |
| 0.0853 | 633.0 | 65199 | 5.2615 | 0.2749 |
| 0.0862 | 634.0 | 65302 | 5.2622 | 0.2955 |
| 0.0823 | 635.0 | 65405 | 5.3123 | 0.2955 |
| 0.0915 | 636.0 | 65508 | 5.2486 | 0.2852 |
| 0.0776 | 637.0 | 65611 | 5.2641 | 0.2955 |
| 0.0799 | 638.0 | 65714 | 5.4327 | 0.2887 |
| 0.0925 | 639.0 | 65817 | 5.3664 | 0.2852 |
| 0.0865 | 640.0 | 65920 | 5.3066 | 0.2990 |
| 0.09 | 641.0 | 66023 | 5.0985 | 0.3127 |
| 0.0867 | 642.0 | 66126 | 5.1732 | 0.2955 |
| 0.084 | 643.0 | 66229 | 5.2330 | 0.3127 |
| 0.0806 | 644.0 | 66332 | 5.2097 | 0.3162 |
| 0.0821 | 645.0 | 66435 | 5.3272 | 0.2990 |
| 0.0869 | 646.0 | 66538 | 5.3930 | 0.3024 |
| 0.0777 | 647.0 | 66641 | 5.3346 | 0.2990 |
| 0.0822 | 648.0 | 66744 | 5.2165 | 0.2990 |
| 0.0967 | 649.0 | 66847 | 5.2284 | 0.3024 |
| 0.0792 | 650.0 | 66950 | 5.3921 | 0.3024 |
| 0.0849 | 651.0 | 67053 | 5.5296 | 0.2749 |
| 0.0854 | 652.0 | 67156 | 5.4795 | 0.2852 |
| 0.0796 | 653.0 | 67259 | 5.3334 | 0.2784 |
| 0.093 | 654.0 | 67362 | 5.3140 | 0.3058 |
| 0.076 | 655.0 | 67465 | 5.3064 | 0.2887 |
| 0.086 | 656.0 | 67568 | 5.3858 | 0.2990 |
| 0.0856 | 657.0 | 67671 | 5.3206 | 0.2887 |
| 0.0826 | 658.0 | 67774 | 5.2731 | 0.2852 |
| 0.0972 | 659.0 | 67877 | 5.3104 | 0.2921 |
| 0.0828 | 660.0 | 67980 | 5.3299 | 0.2955 |
| 0.0792 | 661.0 | 68083 | 5.4611 | 0.2818 |
| 0.0839 | 662.0 | 68186 | 5.4076 | 0.2749 |
| 0.0816 | 663.0 | 68289 | 5.3335 | 0.2852 |
| 0.0786 | 664.0 | 68392 | 5.3885 | 0.2577 |
| 0.0958 | 665.0 | 68495 | 5.4822 | 0.2543 |
| 0.0872 | 666.0 | 68598 | 5.4748 | 0.2784 |
| 0.0823 | 667.0 | 68701 | 5.3412 | 0.2887 |
| 0.0845 | 668.0 | 68804 | 5.2716 | 0.2955 |
| 0.0882 | 669.0 | 68907 | 5.4058 | 0.2818 |
| 0.0794 | 670.0 | 69010 | 5.5217 | 0.2543 |
| 0.0876 | 671.0 | 69113 | 5.3548 | 0.2784 |
| 0.0754 | 672.0 | 69216 | 5.3593 | 0.2921 |
| 0.0842 | 673.0 | 69319 | 5.4261 | 0.2680 |
| 0.0832 | 674.0 | 69422 | 5.3608 | 0.2887 |
| 0.0874 | 675.0 | 69525 | 5.4222 | 0.2784 |
| 0.0822 | 676.0 | 69628 | 5.2592 | 0.3058 |
| 0.0852 | 677.0 | 69731 | 5.2905 | 0.2921 |
| 0.0819 | 678.0 | 69834 | 5.2874 | 0.2955 |
| 0.0842 | 679.0 | 69937 | 5.5141 | 0.2887 |
| 0.0871 | 680.0 | 70040 | 5.3684 | 0.2990 |
| 0.0756 | 681.0 | 70143 | 5.4528 | 0.2887 |
| 0.0844 | 682.0 | 70246 | 5.3712 | 0.2818 |
| 0.0774 | 683.0 | 70349 | 5.3621 | 0.2818 |
| 0.0914 | 684.0 | 70452 | 5.3721 | 0.3024 |
| 0.0883 | 685.0 | 70555 | 5.2809 | 0.2955 |
| 0.0812 | 686.0 | 70658 | 5.3432 | 0.2955 |
| 0.0838 | 687.0 | 70761 | 5.3131 | 0.3162 |
| 0.081 | 688.0 | 70864 | 5.3051 | 0.3058 |
| 0.0785 | 689.0 | 70967 | 5.2396 | 0.3024 |
| 0.0842 | 690.0 | 71070 | 5.2475 | 0.2818 |
| 0.0956 | 691.0 | 71173 | 5.3493 | 0.3058 |
| 0.0823 | 692.0 | 71276 | 5.2118 | 0.3127 |
| 0.0841 | 693.0 | 71379 | 5.1624 | 0.3127 |
| 0.078 | 694.0 | 71482 | 5.2229 | 0.3162 |
| 0.0831 | 695.0 | 71585 | 5.2669 | 0.3127 |
| 0.0863 | 696.0 | 71688 | 5.2763 | 0.3024 |
| 0.0957 | 697.0 | 71791 | 5.3014 | 0.3333 |
| 0.0775 | 698.0 | 71894 | 5.3820 | 0.2990 |
| 0.0907 | 699.0 | 71997 | 5.4359 | 0.3127 |
| 0.0802 | 700.0 | 72100 | 5.4012 | 0.3058 |
| 0.0799 | 701.0 | 72203 | 5.3790 | 0.2784 |
| 0.0822 | 702.0 | 72306 | 5.3593 | 0.2955 |
| 0.0841 | 703.0 | 72409 | 5.3180 | 0.2990 |
| 0.0883 | 704.0 | 72512 | 5.2755 | 0.3024 |
| 0.0863 | 705.0 | 72615 | 5.2439 | 0.3024 |
| 0.0776 | 706.0 | 72718 | 5.2928 | 0.2887 |
| 0.0854 | 707.0 | 72821 | 5.3421 | 0.2749 |
| 0.0853 | 708.0 | 72924 | 5.3366 | 0.2852 |
| 0.0864 | 709.0 | 73027 | 5.3050 | 0.2990 |
| 0.0802 | 710.0 | 73130 | 5.3095 | 0.3024 |
| 0.0868 | 711.0 | 73233 | 5.3088 | 0.2921 |
| 0.0817 | 712.0 | 73336 | 5.2846 | 0.2955 |
| 0.0848 | 713.0 | 73439 | 5.3219 | 0.2612 |
| 0.0891 | 714.0 | 73542 | 5.3707 | 0.2646 |
| 0.0829 | 715.0 | 73645 | 5.3405 | 0.2852 |
| 0.0882 | 716.0 | 73748 | 5.1875 | 0.3024 |
| 0.0944 | 717.0 | 73851 | 5.2667 | 0.2921 |
| 0.0713 | 718.0 | 73954 | 5.2920 | 0.2818 |
| 0.0855 | 719.0 | 74057 | 5.1722 | 0.2955 |
| 0.0812 | 720.0 | 74160 | 5.1372 | 0.2921 |
| 0.0731 | 721.0 | 74263 | 5.1013 | 0.2921 |
| 0.0845 | 722.0 | 74366 | 5.1055 | 0.2990 |
| 0.0857 | 723.0 | 74469 | 5.2164 | 0.2921 |
| 0.0843 | 724.0 | 74572 | 5.3023 | 0.2852 |
| 0.084 | 725.0 | 74675 | 5.1233 | 0.3127 |
| 0.0846 | 726.0 | 74778 | 5.3163 | 0.2680 |
| 0.0838 | 727.0 | 74881 | 5.2244 | 0.2749 |
| 0.0815 | 728.0 | 74984 | 5.1616 | 0.2784 |
| 0.0849 | 729.0 | 75087 | 5.1514 | 0.2955 |
| 0.0818 | 730.0 | 75190 | 5.1428 | 0.2990 |
| 0.0751 | 731.0 | 75293 | 5.1820 | 0.2749 |
| 0.0766 | 732.0 | 75396 | 5.2326 | 0.2749 |
| 0.0772 | 733.0 | 75499 | 5.2083 | 0.2955 |
| 0.0846 | 734.0 | 75602 | 5.3257 | 0.2887 |
| 0.0811 | 735.0 | 75705 | 5.3460 | 0.2784 |
| 0.089 | 736.0 | 75808 | 5.3004 | 0.2852 |
| 0.0711 | 737.0 | 75911 | 5.2424 | 0.2955 |
| 0.0852 | 738.0 | 76014 | 5.3143 | 0.2612 |
| 0.0798 | 739.0 | 76117 | 5.3268 | 0.2646 |
| 0.0783 | 740.0 | 76220 | 5.2696 | 0.2921 |
| 0.086 | 741.0 | 76323 | 5.2744 | 0.2749 |
| 0.0778 | 742.0 | 76426 | 5.3274 | 0.2818 |
| 0.0832 | 743.0 | 76529 | 5.3297 | 0.2852 |
| 0.0826 | 744.0 | 76632 | 5.2858 | 0.2990 |
| 0.0792 | 745.0 | 76735 | 5.3368 | 0.2852 |
| 0.0787 | 746.0 | 76838 | 5.3574 | 0.2749 |
| 0.0732 | 747.0 | 76941 | 5.3469 | 0.2852 |
| 0.0857 | 748.0 | 77044 | 5.2975 | 0.2955 |
| 0.07 | 749.0 | 77147 | 5.3372 | 0.2784 |
| 0.0829 | 750.0 | 77250 | 5.2525 | 0.2921 |
| 0.0794 | 751.0 | 77353 | 5.3314 | 0.2852 |
| 0.0781 | 752.0 | 77456 | 5.3318 | 0.2715 |
| 0.0914 | 753.0 | 77559 | 5.2651 | 0.2715 |
| 0.0822 | 754.0 | 77662 | 5.3557 | 0.2852 |
| 0.0782 | 755.0 | 77765 | 5.4120 | 0.2818 |
| 0.0828 | 756.0 | 77868 | 5.4191 | 0.2921 |
| 0.0747 | 757.0 | 77971 | 5.4100 | 0.3058 |
| 0.0765 | 758.0 | 78074 | 5.3832 | 0.3024 |
| 0.077 | 759.0 | 78177 | 5.3801 | 0.2955 |
| 0.0751 | 760.0 | 78280 | 5.3274 | 0.3058 |
| 0.0821 | 761.0 | 78383 | 5.3911 | 0.2955 |
| 0.0854 | 762.0 | 78486 | 5.4113 | 0.3093 |
| 0.0765 | 763.0 | 78589 | 5.3642 | 0.3024 |
| 0.0787 | 764.0 | 78692 | 5.3545 | 0.2887 |
| 0.0842 | 765.0 | 78795 | 5.3986 | 0.2990 |
| 0.0856 | 766.0 | 78898 | 5.4038 | 0.2887 |
| 0.082 | 767.0 | 79001 | 5.3815 | 0.3058 |
| 0.0787 | 768.0 | 79104 | 5.4093 | 0.2852 |
| 0.0731 | 769.0 | 79207 | 5.3961 | 0.2955 |
| 0.0762 | 770.0 | 79310 | 5.3746 | 0.3093 |
| 0.0874 | 771.0 | 79413 | 5.3983 | 0.3058 |
| 0.0835 | 772.0 | 79516 | 5.4264 | 0.2887 |
| 0.0841 | 773.0 | 79619 | 5.4252 | 0.2990 |
| 0.0792 | 774.0 | 79722 | 5.3730 | 0.3058 |
| 0.0816 | 775.0 | 79825 | 5.3834 | 0.3127 |
| 0.0928 | 776.0 | 79928 | 5.4694 | 0.2887 |
| 0.0739 | 777.0 | 80031 | 5.3801 | 0.2887 |
| 0.0778 | 778.0 | 80134 | 5.3827 | 0.2818 |
| 0.0826 | 779.0 | 80237 | 5.4980 | 0.2887 |
| 0.0873 | 780.0 | 80340 | 5.3884 | 0.2749 |
| 0.0762 | 781.0 | 80443 | 5.3831 | 0.2887 |
| 0.0802 | 782.0 | 80546 | 5.4449 | 0.2852 |
| 0.0832 | 783.0 | 80649 | 5.4030 | 0.2921 |
| 0.0716 | 784.0 | 80752 | 5.4508 | 0.2955 |
| 0.0885 | 785.0 | 80855 | 5.3869 | 0.2887 |
| 0.0685 | 786.0 | 80958 | 5.3692 | 0.2990 |
| 0.0797 | 787.0 | 81061 | 5.3884 | 0.3024 |
| 0.0748 | 788.0 | 81164 | 5.3263 | 0.3162 |
| 0.0741 | 789.0 | 81267 | 5.3524 | 0.3024 |
| 0.0767 | 790.0 | 81370 | 5.2625 | 0.3230 |
| 0.0814 | 791.0 | 81473 | 5.2668 | 0.3299 |
| 0.0845 | 792.0 | 81576 | 5.2356 | 0.3093 |
| 0.076 | 793.0 | 81679 | 5.2616 | 0.3230 |
| 0.0769 | 794.0 | 81782 | 5.3046 | 0.3333 |
| 0.0866 | 795.0 | 81885 | 5.2902 | 0.3299 |
| 0.0772 | 796.0 | 81988 | 5.3078 | 0.3127 |
| 0.079 | 797.0 | 82091 | 5.2889 | 0.2955 |
| 0.0797 | 798.0 | 82194 | 5.2158 | 0.2990 |
| 0.0802 | 799.0 | 82297 | 5.3130 | 0.3024 |
| 0.0859 | 800.0 | 82400 | 5.2843 | 0.3162 |
| 0.0789 | 801.0 | 82503 | 5.2430 | 0.3127 |
| 0.0809 | 802.0 | 82606 | 5.2167 | 0.3436 |
| 0.0787 | 803.0 | 82709 | 5.2202 | 0.3127 |
| 0.0878 | 804.0 | 82812 | 5.3567 | 0.3024 |
| 0.0772 | 805.0 | 82915 | 5.3986 | 0.2887 |
| 0.0809 | 806.0 | 83018 | 5.3578 | 0.2887 |
| 0.0815 | 807.0 | 83121 | 5.3142 | 0.3093 |
| 0.0762 | 808.0 | 83224 | 5.2857 | 0.2955 |
| 0.0732 | 809.0 | 83327 | 5.2571 | 0.2955 |
| 0.0779 | 810.0 | 83430 | 5.2882 | 0.2887 |
| 0.0872 | 811.0 | 83533 | 5.3455 | 0.3024 |
| 0.076 | 812.0 | 83636 | 5.2805 | 0.2955 |
| 0.0894 | 813.0 | 83739 | 5.2921 | 0.2990 |
| 0.0724 | 814.0 | 83842 | 5.3510 | 0.2887 |
| 0.0828 | 815.0 | 83945 | 5.3011 | 0.3024 |
| 0.0818 | 816.0 | 84048 | 5.2944 | 0.3196 |
| 0.0728 | 817.0 | 84151 | 5.2526 | 0.3058 |
| 0.0776 | 818.0 | 84254 | 5.2646 | 0.2921 |
| 0.0768 | 819.0 | 84357 | 5.3151 | 0.3024 |
| 0.0725 | 820.0 | 84460 | 5.3043 | 0.3058 |
| 0.077 | 821.0 | 84563 | 5.3536 | 0.3024 |
| 0.0815 | 822.0 | 84666 | 5.3243 | 0.3162 |
| 0.0753 | 823.0 | 84769 | 5.3728 | 0.2990 |
| 0.0837 | 824.0 | 84872 | 5.3566 | 0.2852 |
| 0.0786 | 825.0 | 84975 | 5.3487 | 0.3058 |
| 0.0897 | 826.0 | 85078 | 5.3847 | 0.2955 |
| 0.079 | 827.0 | 85181 | 5.3576 | 0.2955 |
| 0.0791 | 828.0 | 85284 | 5.3439 | 0.2818 |
| 0.0778 | 829.0 | 85387 | 5.3457 | 0.2921 |
| 0.0732 | 830.0 | 85490 | 5.3470 | 0.2887 |
| 0.0752 | 831.0 | 85593 | 5.3294 | 0.2921 |
| 0.0823 | 832.0 | 85696 | 5.4163 | 0.2887 |
| 0.0803 | 833.0 | 85799 | 5.3962 | 0.3058 |
| 0.0792 | 834.0 | 85902 | 5.3944 | 0.3127 |
| 0.0701 | 835.0 | 86005 | 5.4105 | 0.3024 |
| 0.0853 | 836.0 | 86108 | 5.3402 | 0.3162 |
| 0.0753 | 837.0 | 86211 | 5.3846 | 0.3196 |
| 0.0867 | 838.0 | 86314 | 5.4029 | 0.3024 |
| 0.0722 | 839.0 | 86417 | 5.3613 | 0.3093 |
| 0.0686 | 840.0 | 86520 | 5.3966 | 0.3093 |
| 0.0891 | 841.0 | 86623 | 5.3980 | 0.2955 |
| 0.0826 | 842.0 | 86726 | 5.3373 | 0.3024 |
| 0.0767 | 843.0 | 86829 | 5.4020 | 0.2955 |
| 0.0816 | 844.0 | 86932 | 5.3813 | 0.2784 |
| 0.0775 | 845.0 | 87035 | 5.3968 | 0.2887 |
| 0.0694 | 846.0 | 87138 | 5.4287 | 0.2955 |
| 0.0816 | 847.0 | 87241 | 5.4425 | 0.2990 |
| 0.0697 | 848.0 | 87344 | 5.4049 | 0.3024 |
| 0.0771 | 849.0 | 87447 | 5.4044 | 0.2990 |
| 0.0712 | 850.0 | 87550 | 5.4029 | 0.2990 |
| 0.0806 | 851.0 | 87653 | 5.3960 | 0.2818 |
| 0.0766 | 852.0 | 87756 | 5.3878 | 0.2852 |
| 0.074 | 853.0 | 87859 | 5.4213 | 0.2749 |
| 0.0779 | 854.0 | 87962 | 5.4028 | 0.2784 |
| 0.084 | 855.0 | 88065 | 5.4720 | 0.2852 |
| 0.0757 | 856.0 | 88168 | 5.4470 | 0.2784 |
| 0.0763 | 857.0 | 88271 | 5.4431 | 0.2749 |
| 0.0816 | 858.0 | 88374 | 5.4127 | 0.2749 |
| 0.0761 | 859.0 | 88477 | 5.4201 | 0.2646 |
| 0.093 | 860.0 | 88580 | 5.3464 | 0.3024 |
| 0.0729 | 861.0 | 88683 | 5.3696 | 0.2852 |
| 0.0792 | 862.0 | 88786 | 5.3409 | 0.2990 |
| 0.0742 | 863.0 | 88889 | 5.3730 | 0.2818 |
| 0.0795 | 864.0 | 88992 | 5.4294 | 0.2818 |
| 0.0701 | 865.0 | 89095 | 5.4176 | 0.2715 |
| 0.087 | 866.0 | 89198 | 5.4339 | 0.2784 |
| 0.0775 | 867.0 | 89301 | 5.4669 | 0.2818 |
| 0.0764 | 868.0 | 89404 | 5.4774 | 0.2955 |
| 0.0827 | 869.0 | 89507 | 5.4227 | 0.2921 |
| 0.0757 | 870.0 | 89610 | 5.4220 | 0.3024 |
| 0.0761 | 871.0 | 89713 | 5.3954 | 0.2887 |
| 0.0777 | 872.0 | 89816 | 5.3860 | 0.3024 |
| 0.0737 | 873.0 | 89919 | 5.3625 | 0.2818 |
| 0.0777 | 874.0 | 90022 | 5.4137 | 0.2955 |
| 0.0758 | 875.0 | 90125 | 5.4152 | 0.2818 |
| 0.0764 | 876.0 | 90228 | 5.3812 | 0.2921 |
| 0.087 | 877.0 | 90331 | 5.3757 | 0.3024 |
| 0.0705 | 878.0 | 90434 | 5.3995 | 0.2852 |
| 0.0831 | 879.0 | 90537 | 5.3755 | 0.2852 |
| 0.0692 | 880.0 | 90640 | 5.3843 | 0.2852 |
| 0.0752 | 881.0 | 90743 | 5.3978 | 0.2852 |
| 0.0732 | 882.0 | 90846 | 5.3873 | 0.2887 |
| 0.0836 | 883.0 | 90949 | 5.3961 | 0.2818 |
| 0.0761 | 884.0 | 91052 | 5.4159 | 0.2887 |
| 0.082 | 885.0 | 91155 | 5.4183 | 0.2990 |
| 0.0729 | 886.0 | 91258 | 5.4438 | 0.2921 |
| 0.0908 | 887.0 | 91361 | 5.4588 | 0.2784 |
| 0.0677 | 888.0 | 91464 | 5.4840 | 0.2818 |
| 0.0821 | 889.0 | 91567 | 5.4664 | 0.2887 |
| 0.0812 | 890.0 | 91670 | 5.5019 | 0.2990 |
| 0.0849 | 891.0 | 91773 | 5.4783 | 0.3024 |
| 0.079 | 892.0 | 91876 | 5.4933 | 0.2818 |
| 0.0703 | 893.0 | 91979 | 5.5191 | 0.2921 |
| 0.0777 | 894.0 | 92082 | 5.5171 | 0.2921 |
| 0.0767 | 895.0 | 92185 | 5.5280 | 0.2818 |
| 0.0697 | 896.0 | 92288 | 5.4920 | 0.2887 |
| 0.0831 | 897.0 | 92391 | 5.4587 | 0.2887 |
| 0.0715 | 898.0 | 92494 | 5.4843 | 0.2887 |
| 0.0764 | 899.0 | 92597 | 5.5036 | 0.2921 |
| 0.0785 | 900.0 | 92700 | 5.4781 | 0.2921 |
| 0.0783 | 901.0 | 92803 | 5.4685 | 0.3058 |
| 0.0791 | 902.0 | 92906 | 5.4434 | 0.3093 |
| 0.0714 | 903.0 | 93009 | 5.4704 | 0.3093 |
| 0.0834 | 904.0 | 93112 | 5.4543 | 0.3058 |
| 0.0796 | 905.0 | 93215 | 5.4430 | 0.3093 |
| 0.0741 | 906.0 | 93318 | 5.4621 | 0.2990 |
| 0.0752 | 907.0 | 93421 | 5.4498 | 0.3024 |
| 0.0776 | 908.0 | 93524 | 5.4553 | 0.2955 |
| 0.0795 | 909.0 | 93627 | 5.4151 | 0.3024 |
| 0.0771 | 910.0 | 93730 | 5.3965 | 0.2990 |
| 0.0756 | 911.0 | 93833 | 5.4121 | 0.3058 |
| 0.0769 | 912.0 | 93936 | 5.4056 | 0.2990 |
| 0.0799 | 913.0 | 94039 | 5.3876 | 0.3024 |
| 0.0853 | 914.0 | 94142 | 5.4022 | 0.2990 |
| 0.0726 | 915.0 | 94245 | 5.4384 | 0.2852 |
| 0.0745 | 916.0 | 94348 | 5.4223 | 0.2955 |
| 0.0688 | 917.0 | 94451 | 5.4298 | 0.2887 |
| 0.0743 | 918.0 | 94554 | 5.4227 | 0.3024 |
| 0.0842 | 919.0 | 94657 | 5.3807 | 0.3093 |
| 0.0732 | 920.0 | 94760 | 5.3881 | 0.2990 |
| 0.0717 | 921.0 | 94863 | 5.3828 | 0.2990 |
| 0.084 | 922.0 | 94966 | 5.3770 | 0.3024 |
| 0.079 | 923.0 | 95069 | 5.3873 | 0.2887 |
| 0.0761 | 924.0 | 95172 | 5.3788 | 0.2921 |
| 0.0777 | 925.0 | 95275 | 5.3932 | 0.2921 |
| 0.0729 | 926.0 | 95378 | 5.4352 | 0.2921 |
| 0.0756 | 927.0 | 95481 | 5.4271 | 0.2921 |
| 0.0699 | 928.0 | 95584 | 5.4086 | 0.2955 |
| 0.0814 | 929.0 | 95687 | 5.4210 | 0.2784 |
| 0.07 | 930.0 | 95790 | 5.4176 | 0.2852 |
| 0.0736 | 931.0 | 95893 | 5.4347 | 0.2852 |
| 0.0694 | 932.0 | 95996 | 5.4364 | 0.2887 |
| 0.0771 | 933.0 | 96099 | 5.4468 | 0.2852 |
| 0.0718 | 934.0 | 96202 | 5.4523 | 0.2887 |
| 0.0784 | 935.0 | 96305 | 5.4216 | 0.2852 |
| 0.087 | 936.0 | 96408 | 5.4159 | 0.2818 |
| 0.0717 | 937.0 | 96511 | 5.4228 | 0.2852 |
| 0.0714 | 938.0 | 96614 | 5.4017 | 0.2852 |
| 0.0754 | 939.0 | 96717 | 5.4021 | 0.2852 |
| 0.0733 | 940.0 | 96820 | 5.3958 | 0.2852 |
| 0.0697 | 941.0 | 96923 | 5.3859 | 0.2887 |
| 0.082 | 942.0 | 97026 | 5.3714 | 0.2921 |
| 0.0696 | 943.0 | 97129 | 5.3697 | 0.2921 |
| 0.0719 | 944.0 | 97232 | 5.3969 | 0.2784 |
| 0.0772 | 945.0 | 97335 | 5.3958 | 0.2852 |
| 0.0759 | 946.0 | 97438 | 5.4128 | 0.2818 |
| 0.074 | 947.0 | 97541 | 5.4283 | 0.2818 |
| 0.0704 | 948.0 | 97644 | 5.4305 | 0.2818 |
| 0.069 | 949.0 | 97747 | 5.4300 | 0.2852 |
| 0.0701 | 950.0 | 97850 | 5.4446 | 0.2818 |
| 0.087 | 951.0 | 97953 | 5.4365 | 0.2818 |
| 0.0837 | 952.0 | 98056 | 5.4268 | 0.2921 |
| 0.0754 | 953.0 | 98159 | 5.4260 | 0.2955 |
| 0.0778 | 954.0 | 98262 | 5.4057 | 0.2955 |
| 0.0643 | 955.0 | 98365 | 5.3992 | 0.2921 |
| 0.0768 | 956.0 | 98468 | 5.3886 | 0.2990 |
| 0.0727 | 957.0 | 98571 | 5.3845 | 0.2990 |
| 0.0859 | 958.0 | 98674 | 5.3822 | 0.2955 |
| 0.0831 | 959.0 | 98777 | 5.3852 | 0.2955 |
| 0.0756 | 960.0 | 98880 | 5.3884 | 0.2955 |
| 0.0857 | 961.0 | 98983 | 5.3892 | 0.2955 |
| 0.0707 | 962.0 | 99086 | 5.3776 | 0.2921 |
| 0.0746 | 963.0 | 99189 | 5.3785 | 0.3058 |
| 0.0745 | 964.0 | 99292 | 5.3776 | 0.2921 |
| 0.0827 | 965.0 | 99395 | 5.3704 | 0.2887 |
| 0.0774 | 966.0 | 99498 | 5.3653 | 0.2921 |
| 0.0795 | 967.0 | 99601 | 5.3569 | 0.2887 |
| 0.0759 | 968.0 | 99704 | 5.3515 | 0.2887 |
| 0.0713 | 969.0 | 99807 | 5.3752 | 0.2955 |
| 0.0735 | 970.0 | 99910 | 5.3728 | 0.2955 |
| 0.0777 | 971.0 | 100013 | 5.3690 | 0.2955 |
| 0.0844 | 972.0 | 100116 | 5.3782 | 0.2921 |
| 0.0758 | 973.0 | 100219 | 5.3822 | 0.2921 |
| 0.0735 | 974.0 | 100322 | 5.3893 | 0.2852 |
| 0.0698 | 975.0 | 100425 | 5.3887 | 0.2818 |
| 0.0773 | 976.0 | 100528 | 5.3908 | 0.2852 |
| 0.0695 | 977.0 | 100631 | 5.3909 | 0.2887 |
| 0.0786 | 978.0 | 100734 | 5.3939 | 0.2921 |
| 0.0784 | 979.0 | 100837 | 5.3838 | 0.2921 |
| 0.078 | 980.0 | 100940 | 5.3891 | 0.2921 |
| 0.0721 | 981.0 | 101043 | 5.3875 | 0.2887 |
| 0.0779 | 982.0 | 101146 | 5.3925 | 0.2887 |
| 0.0706 | 983.0 | 101249 | 5.4006 | 0.2921 |
| 0.0808 | 984.0 | 101352 | 5.4022 | 0.2921 |
| 0.071 | 985.0 | 101455 | 5.4076 | 0.2921 |
| 0.0743 | 986.0 | 101558 | 5.4104 | 0.2921 |
| 0.0784 | 987.0 | 101661 | 5.4093 | 0.2921 |
| 0.0793 | 988.0 | 101764 | 5.4071 | 0.2921 |
| 0.0838 | 989.0 | 101867 | 5.4029 | 0.2921 |
| 0.0708 | 990.0 | 101970 | 5.4035 | 0.2921 |
| 0.0742 | 991.0 | 102073 | 5.4021 | 0.2921 |
| 0.0746 | 992.0 | 102176 | 5.4050 | 0.2921 |
| 0.0756 | 993.0 | 102279 | 5.4059 | 0.2921 |
| 0.0744 | 994.0 | 102382 | 5.4053 | 0.2921 |
| 0.0741 | 995.0 | 102485 | 5.4075 | 0.2921 |
| 0.0757 | 996.0 | 102588 | 5.4072 | 0.2921 |
| 0.0735 | 997.0 | 102691 | 5.4086 | 0.2921 |
| 0.0708 | 998.0 | 102794 | 5.4088 | 0.2921 |
| 0.0812 | 999.0 | 102897 | 5.4088 | 0.2921 |
| 0.0722 | 1000.0 | 103000 | 5.4090 | 0.2921 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.12.1
|
Waynehillsdev/Waynehills_summary_tensorflow
|
[
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 35.70 +/- 28.83
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Waynehillsdev/waynehills_sentimental_kor
|
[
"pytorch",
"electra",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 33 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-ruquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-ruquad-qg): `vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-5000`
This model is a trimmed version of [lmqg/mbart-large-cc25-ruquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-ruquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-ruquad-qg | vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-5000 |
|:---------------------------|:----------------------------------|:----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 359,949,312 |
| parameter_size_embedding | 512,057,344 | 10,250,240 |
| vocab_size | 250,028 | 5,005 |
| compression_rate_full | 100.0 | 58.93 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ru | vocabtrimmer/mc4_validation | text | ru | validation | 5000 | 2 |
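The trimmed checkpoint loads with plain `transformers`, just like the base model. A minimal sketch (the input string is illustrative only; the actual question-generation input format follows the base lmqg model):
```python
from transformers import AutoTokenizer, MBartForConditionalGeneration

name = "vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-5000"
tokenizer = AutoTokenizer.from_pretrained(name)
model = MBartForConditionalGeneration.from_pretrained(name)

# Illustrative input; follow the base lmqg model card for the exact markup.
inputs = tokenizer("...", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```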
|
Doohae/roberta
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Doquey/DialoGPT-small-Luisbot1
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.29 +/- 12.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the `repo_id` and `filename` below are placeholders, since the card does not state where this checkpoint is hosted:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder Hub coordinates; substitute this model's actual repo_id and filename.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Doquey/DialoGPT-small-Michaelbot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
- polyglot-ko
- gpt-neox
- KoAlpaca
model-index:
- name: KoAlpaca-Polyglot-5.8B
results: []
language:
- ko
datasets:
- KoAlpaca-v1.1b
pipeline_tag: text-generation
---
# KoAlpaca-Polyglot-5.8B (v1.1b)
This model is a fine-tuned version of [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) on the KoAlpaca dataset v1.1b.
Detailed code is available at the [KoAlpaca GitHub repository](https://github.com/Beomi/KoAlpaca).
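A minimal generation sketch; the Hub id and the `### 질문:`/`### 답변:` prompt markers are assumptions based on the KoAlpaca repository, not stated in this card:
```python
from transformers import pipeline

# Assumed Hub id; device_map="auto" requires `accelerate` and a sizeable GPU for a 5.8B model.
pipe = pipeline("text-generation", model="beomi/KoAlpaca-Polyglot-5.8B", device_map="auto")
prompt = "### 질문: 한국의 수도는 어디인가요?\n\n### 답변:"
print(pipe(prompt, max_new_tokens=64)[0]["generated_text"])
```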
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Doxophobia/DialoGPT-medium-celeste
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the Q-table download helper defined in the Deep RL Course (Unit 2) notebook.
model = load_from_hub(repo_id="heziyevv/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
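From here, a greedy rollout looks roughly like the sketch below, assuming the course's model dict exposes a `qtable` key and your gym version uses the 5-tuple `step()` API (older releases return a 4-tuple):
```python
import numpy as np

state, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = np.argmax(model["qtable"][state])  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
```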
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-12
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the Q-table download helper defined in the Deep RL Course (Unit 2) notebook.
model = load_from_hub(repo_id="dussinus/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-8
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the Q-table download helper defined in the Deep RL Course (Unit 2) notebook.
model = load_from_hub(repo_id="dussinus/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-slanted
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
Access to model pyoyoso/Python-Piscine is restricted and you are not in the authorized list. Visit https://huggingface.co/pyoyoso/Python-Piscine to ask for access.
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
tag: conversational
pipeline_tag: conversational
---
|
DoyyingFace/bert-asian-hate-tweets-asonam-clean
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | null |
---
license: apache-2.0
pipeline_tag: question-answering
tags:
- question-answering
- transformers
- generated_from_trainer
datasets:
- squad_v2
- LLukas22/nq-simplified
- newsqa
- LLukas22/NLQuAD
- deepset/germanquad
language:
- en
- de
---
# all-MiniLM-L12-v2-qa-all
This model is an extractive question-answering (QA) model.
It's a fine-tuned version of [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) on the following datasets: [squad_v2](https://huggingface.co/datasets/squad_v2), [LLukas22/nq-simplified](https://huggingface.co/datasets/LLukas22/nq-simplified), [newsqa](https://huggingface.co/datasets/newsqa), [LLukas22/NLQuAD](https://huggingface.co/datasets/LLukas22/NLQuAD), [deepset/germanquad](https://huggingface.co/datasets/deepset/germanquad).
## Usage
You can use the model like this:
```python
from transformers import pipeline
# Make predictions
model_name = "LLukas22/all-MiniLM-L12-v2-qa-all"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
"question": "What's my name?",
"context": "My name is Clara and I live in Berkeley."
}
result = nlp(QA_input)
print(result)
```
Alternatively you can load the model and tokenizer on their own:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
# Load the model and tokenizer
model_name = "LLukas22/all-MiniLM-L12-v2-qa-all"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
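The snippet above only loads the weights; to actually answer a question with them, a minimal continuation:
```python
import torch

question = "What's my name?"
context = "My name is Clara and I live in Berkeley."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax() + 1  # end logit indexes the last answer token
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```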
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2E-05
- per device batch size: 60
- effective batch size: 180
- seed: 42
- optimizer: AdamW with betas (0.9,0.999) and eps 1E-08
- weight decay: 1E-02
- D-Adaptation: False
- Warmup: True
- number of epochs: 15
- mixed_precision_training: bf16
## Training results
| Epoch | Train Loss | Validation Loss |
| ----- | ---------- | --------------- |
| 0 | 3.76 | 3.02 |
| 1 | 2.57 | 2.23 |
| 2 | 2.2 | 2.08 |
| 3 | 2.07 | 2.03 |
| 4 | 1.96 | 1.97 |
| 5 | 1.87 | 1.93 |
| 6 | 1.81 | 1.91 |
| 7 | 1.77 | 1.89 |
| 8 | 1.73 | 1.89 |
| 9 | 1.7 | 1.9 |
| 10 | 1.68 | 1.9 |
| 11 | 1.67 | 1.9 |
## Evaluation results
| Epoch | f1 | exact_match |
| ----- | ----- | ----- |
| 0 | 0.29 | 0.228 |
| 1 | 0.371 | 0.329 |
| 2 | 0.413 | 0.369 |
| 3 | 0.437 | 0.376 |
| 4 | 0.454 | 0.388 |
| 5 | 0.468 | 0.4 |
| 6 | 0.479 | 0.408 |
| 7 | 0.487 | 0.415 |
| 8 | 0.495 | 0.421 |
| 9 | 0.501 | 0.416 |
| 10 | 0.506 | 0.42 |
| 11 | 0.51 | 0.421 |
## Framework versions
- Transformers: 4.25.1
- PyTorch: 2.0.0.dev20230210+cu118
- PyTorch Lightning: 1.8.6
- Datasets: 2.7.1
- Tokenizers: 0.13.1
- Sentence Transformers: 2.2.2
## Additional Information
This model was trained as part of my Master's Thesis **'Evaluation of transformer based language models for use in service information systems'**. The source code is available on [Github](https://github.com/LLukas22/Retrieval-Augmented-QA).
|
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: brand25/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
DoyyingFace/bert-asian-hate-tweets-concat-clean
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | null |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: astefani/ppo-Huggy-v2
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
albert-large-v2
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26,792 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: ecemisildar/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
albert-xlarge-v2
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,973 | 2023-03-16T16:25:41Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1233.11 +/- 110.64
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the `repo_id` and `filename` below are placeholders, since the card does not state where this checkpoint is hosted:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder Hub coordinates; substitute this model's actual repo_id and filename.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
albert-xxlarge-v2
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42,640 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-ruquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-ruquad-qg): `vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-10000`
This model is a trimmed version of [lmqg/mbart-large-cc25-ruquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-ruquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-ruquad-qg | vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-10000 |
|:---------------------------|:----------------------------------|:-----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 365,068,288 |
| parameter_size_embedding | 512,057,344 | 20,488,192 |
| vocab_size | 250,028 | 10,004 |
| compression_rate_full | 100.0 | 59.76 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ru | vocabtrimmer/mc4_validation | text | ru | validation | 10000 | 2 |
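The embedding figures are consistent with mBART-large's hidden size of 1024, times a factor of 2 that presumably counts the tied input embedding and the LM head separately. A quick check:
```python
d_model = 1024  # mBART-large hidden size
for vocab in (250_028, 10_004):
    print(vocab * d_model * 2)  # 512,057,344 and 20,488,192, matching the table
```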
|
bert-base-cased-finetuned-mrpc
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11,644 | 2023-03-16T16:28:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 540.00 +/- 124.40
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sachaguer -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sachaguer -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sachaguer
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
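For orientation, these zoo hyperparameters map roughly onto a direct SB3 call, as in the illustrative sketch below (the zoo additionally applies the `AtariWrapper` and 4-frame stacking listed above to the environment):
```python
from stable_baselines3 import DQN

# Illustrative only; Atari preprocessing (AtariWrapper, frame_stack=4) is omitted here.
model = DQN(
    "CnnPolicy",
    "SpaceInvadersNoFrameskip-v4",
    learning_rate=1e-4,
    buffer_size=100_000,
    learning_starts=100_000,
    batch_size=32,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1_000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    optimize_memory_usage=False,
)
model.learn(total_timesteps=1_000_000)  # n_timesteps
```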
|
bert-base-german-cased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 175,983 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: ecemisildar/Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
bert-base-german-dbmdz-cased
|
[
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,814 | 2023-03-16T16:34:01Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.60 +/- 5.95
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r NielsV/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The generated card captured the notebook launcher path here; the module below
# assumes sample-factory's bundled VizDoom example scripts.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# Same caveat as above: sf_examples.vizdoom.train_vizdoom is assumed from
# sample-factory's example scripts.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
bert-base-german-dbmdz-uncased
|
[
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 68,305 | 2023-03-16T16:35:12Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: AraBERT-finetuned-fnd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraBERT-finetuned-fnd
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5846
- Macro F1: 0.7751
- Accuracy: 0.7803
- Precision: 0.7740
- Recall: 0.7767
## Model description
More information needed
## Intended uses & limitations
More information needed
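A minimal inference sketch with 🤗 Transformers (the repo id below is a placeholder, since this card does not state where the fine-tuned checkpoint is hosted; the label names depend on the model's config, and the upstream AraBERT repo recommends running text through its `ArabertPreprocessor` before inference):

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub id of this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="<user>/AraBERT-finetuned-fnd")
print(classifier("ضع هنا نص الخبر المراد تصنيفه"))
```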
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:------:|
| 0.5538 | 1.0 | 1597 | 0.5104 | 0.7183 | 0.7352 | 0.7323 | 0.7142 |
| 0.4689 | 2.0 | 3194 | 0.4849 | 0.7435 | 0.7574 | 0.7551 | 0.7392 |
| 0.3876 | 3.0 | 4791 | 0.4828 | 0.7693 | 0.7747 | 0.7682 | 0.7708 |
| 0.3152 | 4.0 | 6388 | 0.5412 | 0.7702 | 0.7747 | 0.7686 | 0.7729 |
| 0.2627 | 5.0 | 7985 | 0.5846 | 0.7751 | 0.7803 | 0.7740 | 0.7767 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bert-base-uncased
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 59,663,489 | 2023-03-16T16:41:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: fine_tune_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tune_results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0821
## Model description
More information needed
## Intended uses & limitations
More information needed
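A minimal loading sketch: since the card does not state which task head was trained, this uses the generic `AutoModel` (swap in the task-specific class, e.g. `AutoModelForSequenceClassification`, once the head is known; the repo id is a placeholder):

```python
from transformers import AutoModel, AutoTokenizer

# Placeholder repo id: the card does not state where this checkpoint is hosted.
tokenizer = AutoTokenizer.from_pretrained("<user>/fine_tune_results")
model = AutoModel.from_pretrained("<user>/fine_tune_results")
```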
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 0.5517 |
| No log | 2.0 | 2 | 0.4685 |
| No log | 3.0 | 3 | 0.4025 |
| No log | 4.0 | 4 | 0.3522 |
| No log | 5.0 | 5 | 0.3154 |
| No log | 6.0 | 6 | 0.2895 |
| No log | 7.0 | 7 | 0.2715 |
| No log | 8.0 | 8 | 0.2579 |
| No log | 9.0 | 9 | 0.2464 |
| No log | 10.0 | 10 | 0.2362 |
| No log | 11.0 | 11 | 0.2270 |
| No log | 12.0 | 12 | 0.2188 |
| No log | 13.0 | 13 | 0.2114 |
| No log | 14.0 | 14 | 0.2049 |
| No log | 15.0 | 15 | 0.1988 |
| No log | 16.0 | 16 | 0.1926 |
| No log | 17.0 | 17 | 0.1862 |
| No log | 18.0 | 18 | 0.1793 |
| No log | 19.0 | 19 | 0.1720 |
| No log | 20.0 | 20 | 0.1644 |
| No log | 21.0 | 21 | 0.1565 |
| No log | 22.0 | 22 | 0.1485 |
| No log | 23.0 | 23 | 0.1406 |
| No log | 24.0 | 24 | 0.1330 |
| No log | 25.0 | 25 | 0.1259 |
| No log | 26.0 | 26 | 0.1193 |
| No log | 27.0 | 27 | 0.1133 |
| No log | 28.0 | 28 | 0.1080 |
| No log | 29.0 | 29 | 0.1032 |
| No log | 30.0 | 30 | 0.0991 |
| No log | 31.0 | 31 | 0.0955 |
| No log | 32.0 | 32 | 0.0925 |
| No log | 33.0 | 33 | 0.0900 |
| No log | 34.0 | 34 | 0.0878 |
| No log | 35.0 | 35 | 0.0860 |
| No log | 36.0 | 36 | 0.0847 |
| No log | 37.0 | 37 | 0.0836 |
| No log | 38.0 | 38 | 0.0828 |
| No log | 39.0 | 39 | 0.0823 |
| No log | 40.0 | 40 | 0.0821 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cpu
- Datasets 2.10.1
- Tokenizers 0.13.2
|
bert-large-cased-whole-word-masking
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,316 | 2023-03-16T16:48:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook: it
# downloads and unpickles the model dict (env id, Q-table, ...) from the Hub.
model = load_from_hub(repo_id="heziyevv/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
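A greedy rollout sketch, assuming the pickled dict stores the Q-table under the `"qtable"` key (as in the course notebooks) and that the environment follows the Gym >= 0.26 step API:

```python
import numpy as np

# Roll out one episode, always taking the greedy action from the Q-table.
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```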
|
ctrl
|
[
"pytorch",
"tf",
"ctrl",
"en",
"arxiv:1909.05858",
"arxiv:1910.09700",
"transformers",
"license:bsd-3-clause",
"has_space"
] | null |
{
"architectures": null,
"model_type": "ctrl",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17,007 | 2023-03-16T17:03:59Z |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-ruquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-ruquad-qg): `vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-15000`
This model is a trimmed version of [lmqg/mbart-large-cc25-ruquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-ruquad-qg) created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-ruquad-qg | vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-15000 |
|:---------------------------|:----------------------------------|:-----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 370,188,288 |
| parameter_size_embedding | 512,057,344 | 30,728,192 |
| vocab_size | 250,028 | 15,004 |
| compression_rate_full | 100.0 | 60.6 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ru | vocabtrimmer/mc4_validation | text | ru | validation | 15000 | 2 |
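As a quick sanity check, the full parameter counts above can be reproduced by loading both checkpoints with 🤗 Transformers (`parameters()` deduplicates tied weights, so the totals should line up):

```python
from transformers import AutoModelForSeq2SeqLM

# Load the original and the trimmed checkpoint and compare total parameter counts.
for repo in ("lmqg/mbart-large-cc25-ruquad-qg",
             "vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-15000"):
    model = AutoModelForSeq2SeqLM.from_pretrained(repo)
    print(repo, f"{sum(p.numel() for p in model.parameters()):,} parameters")
```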
|
distilbert-base-cased
|
[
"pytorch",
"tf",
"onnx",
"distilbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"has_space"
] | null |
{
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 574,859 | 2023-03-16T17:13:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.07 +/- 15.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and checkpoint filename below are placeholders; check this repo's file list for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename: substitute this repo's actual values.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
xlm-mlm-en-2048
|
[
"pytorch",
"tf",
"xlm",
"fill-mask",
"en",
"arxiv:1901.07291",
"arxiv:1911.02116",
"arxiv:1910.09700",
"transformers",
"exbert",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"XLMWithLMHeadModel"
],
"model_type": "xlm",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7,043 | 2023-03-16T17:58:34Z |
VirtualGirl Series
LoRA for stable-diffusion-v1-5
No photos of real people or copyrighted portraits were used during the training process.
<u><b>Disclaimer: the VirtualGirl series LoRA was created to avoid the problems of real photos and copyrighted portraits, and I do not want this LoRA to be used with models containing real photos or copyrighted portraits.</b></u>
VirtualGirl-Aim (v2, more suitable for upper body)
<img src="https://huggingface.co/NyaaCaster/VirtualGirls/resolve/main/VirtualGirl-Aim.preview.png">
-----------------------------
VirtualGirl-Rin (v2, more suitable for upper body)
<img src="https://huggingface.co/NyaaCaster/VirtualGirls/resolve/main/VirtualGirl-Rin.preview.png">
-----------------------------
VirtualGirl-Ren
<img src="https://huggingface.co/NyaaCaster/VirtualGirls/resolve/main/VirtualGirl-Ren.preview.png">
-----------------------------
VirtualGirl-Aki
<img src="https://huggingface.co/NyaaCaster/VirtualGirls/resolve/main/VirtualGirl-Aki.preview.png">
-----------------------------
VirtualGirl-Riz
<img src="https://huggingface.co/NyaaCaster/VirtualGirls/resolve/main/00022-2242288971.png">
|
xlm-mlm-xnli15-1024
|
[
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"arxiv:1901.07291",
"arxiv:1910.09700",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"XLMWithLMHeadModel"
],
"model_type": "xlm",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,050 | null |
Access to model snowfliger/LineCake is restricted and you are not in the authorized list. Visit https://huggingface.co/snowfliger/LineCake to ask for access.
|
AdapterHub/bert-base-uncased-pf-commonsense_qa
|
[
"bert",
"en",
"dataset:commonsense_qa",
"arxiv:2104.08247",
"adapter-transformers",
"adapterhub:comsense/csqa"
] | null |
{
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | 2023-03-17T00:42:57Z |
---
license: openrail
---
# Fake news classifier
This repo contains a fine-tuned BERT text-classification model that detects fake news articles!
## Code
The `Load Model from Hugging Face Hub.ipynb` Python notebook contains code that can be used to load the model and perform inference.
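For reference, a minimal inference sketch along the lines of that notebook (the repo id is a placeholder, and the label mapping depends on the model's config):

```python
from transformers import pipeline

# Placeholder repo id: use this repo's actual Hub id.
detector = pipeline("text-classification", model="<user>/<this-repo>")
print(detector("Scientists confirm the moon is made of cheese."))
```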
|
AdapterHub/bert-base-uncased-pf-pmb_sem_tagging
|
[
"bert",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"token-classification",
"adapterhub:semtag/pmb"
] |
token-classification
|
{
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
metrics:
- wer
model-index:
- name: whisper-medium-arabic-suite-II
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ar
split: test
args: 'config: ar, split: test'
metrics:
- name: Wer
type: wer
value: 15.6083
datasets:
- mozilla-foundation/common_voice_11_0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-arabic-suite-II
This model is a fine-tuned version of [Seyfelislem/whisper-medium-arabic-suite](https://huggingface.co/Seyfelislem/whisper-medium-arabic-suite) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1897
- Wer: 15.6083
## Model description
More information needed
## Intended uses & limitations
More information needed
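A minimal transcription sketch with the 🤗 Transformers ASR pipeline (the repo id is inferred from the model name, so the namespace may need adjusting):

```python
from transformers import pipeline

# Repo id assumed from the model name; adjust the namespace if needed.
asr = pipeline("automatic-speech-recognition",
               model="Seyfelislem/whisper-medium-arabic-suite-II")
print(asr("audio.wav")["text"])
```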
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 800
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.088 | 0.67 | 800 | 0.1897 | 15.6083 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.0
- Datasets 2.10.2.dev0
- Tokenizers 0.13.2
|
AdapterHub/bert-base-uncased-pf-yelp_polarity
|
[
"bert",
"en",
"dataset:yelp_polarity",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification"
] |
text-classification
|
{
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: cc
tags:
- tensorflow
- image-to-image
---
# Whitebox Cartoonizer
Whitebox Cartoonizer [1] model in TensorFlow's `SavedModel` format. The model was exported to the `SavedModel` format using
[this notebook](https://huggingface.co/sayakpaul/whitebox-cartoonizer/blob/main/export-saved-model.ipynb). The original model
repository can be found [here](https://github.com/SystemErrorWang/White-box-Cartoonization).
<p align="center">
<img src="https://huggingface.co/sayakpaul/whitebox-cartoonizer/resolve/main/output.png"/>
</p>
## Inference code
```py
import cv2
import numpy as np
import requests
import tensorflow as tf
from huggingface_hub import snapshot_download
from PIL import Image
def resize_crop(image):
h, w, c = np.shape(image)
if min(h, w) > 720:
if h > w:
h, w = int(720 * h / w), 720
else:
h, w = 720, int(720 * w / h)
image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA)
h, w = (h // 8) * 8, (w // 8) * 8
image = image[:h, :w, :]
return image
def download_image(url):
image = Image.open(requests.get(url, stream=True).raw)
image = image.convert("RGB")
image = np.array(image)
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
return image
def preprocess_image(image):
image = resize_crop(image)
image = image.astype(np.float32) / 127.5 - 1
image = np.expand_dims(image, axis=0)
image = tf.constant(image)
return image
# Load the model and extract concrete function.
model_path = snapshot_download("sayakpaul/whitebox-cartoonizer")
loaded_model = tf.saved_model.load(model_path)
concrete_func = loaded_model.signatures["serving_default"]
# Download and preprocess image.
image_url = "https://huggingface.co/spaces/sayakpaul/cartoonizer-demo-onnx/resolve/main/mountain.jpeg"
image = download_image(image_url)
preprocessed_image = preprocess_image(image)
# Run inference.
result = concrete_func(preprocessed_image)["final_output:0"]
# Post-process the result and serialize it.
output = (result[0].numpy() + 1.0) * 127.5
output = np.clip(output, 0, 255).astype(np.uint8)
output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
output_image = Image.fromarray(output)
output_image.save("result.png")
```
## References
[1] Learning to Cartoonize Using White-box Cartoon Representations; Xinrui Wang and Jinze Yu; CVPR 2020.
|
AdapterHub/roberta-base-pf-trec
|
[
"roberta",
"en",
"dataset:trec",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification"
] |
text-classification
|
{
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.63 +/- 15.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and checkpoint filename below are placeholders; check this repo's file list for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename: substitute this repo's actual values.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Aftabhussain/Tomato_Leaf_Classifier
|
[
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index",
"autotrain_compatible"
] |
image-classification
|
{
"architectures": [
"ViTForImageClassification"
],
"model_type": "vit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 50 | 2023-03-17T06:27:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.29 +/- 41.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and checkpoint filename below are placeholders; check this repo's file list for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename: substitute this repo's actual values.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AhmedSSoliman/MarianCG-CoNaLa
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible",
"has_space"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 21 | 2023-03-17T06:59:52Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0875
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
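A minimal inference sketch with the 🤗 Transformers image-classification pipeline (the repo id is a placeholder; point it at wherever this checkpoint is hosted):

```python
from transformers import pipeline

# Placeholder repo id: substitute this checkpoint's actual Hub id.
classifier = pipeline("image-classification", model="<user>/vit-base-beans")
print(classifier("bean_leaf.jpg"))
```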
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.2574 | 1.0 | 130 | 0.9624 | 0.2307 |
| 0.2785 | 2.0 | 260 | 0.9925 | 0.1109 |
| 0.1496 | 3.0 | 390 | 0.9699 | 0.1109 |
| 0.0916 | 4.0 | 520 | 0.9850 | 0.0875 |
| 0.1489 | 5.0 | 650 | 0.9774 | 0.0886 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Akash7897/fill_mask_model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-17T07:44:36Z |
---
tags:
- generated_from_trainer
model-index:
- name: layoutlm-synth2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-synth2
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0270
- Ank Address: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20}
- Ank Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20}
- Ayee Address: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20}
- Ayee Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20}
- Icr: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20}
- Mount: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20}
- Overall Precision: 1.0
- Overall Recall: 1.0
- Overall F1: 1.0
- Overall Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ank Address | Ank Name | Ayee Address | Ayee Name | Icr | Mount | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------:|:-------------------------------------------------------------------------:|:----------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.4218 | 1.0 | 10 | 0.9682 | {'precision': 0.03225806451612903, 'recall': 0.1, 'f1': 0.04878048780487805, 'number': 20} | {'precision': 0.3333333333333333, 'recall': 0.05, 'f1': 0.08695652173913045, 'number': 20} | {'precision': 0.03125, 'recall': 0.1, 'f1': 0.047619047619047616, 'number': 20} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 20} | {'precision': 1.0, 'recall': 0.7, 'f1': 0.8235294117647058, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | 0.2393 | 0.325 | 0.2756 | 0.5811 |
| 0.7362 | 2.0 | 20 | 0.3668 | {'precision': 0.8636363636363636, 'recall': 0.95, 'f1': 0.9047619047619048, 'number': 20} | {'precision': 0.9090909090909091, 'recall': 1.0, 'f1': 0.9523809523809523, 'number': 20} | {'precision': 0.8571428571428571, 'recall': 0.9, 'f1': 0.8780487804878048, 'number': 20} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | 0.904 | 0.9417 | 0.9224 | 0.9855 |
| 0.2488 | 3.0 | 30 | 0.0892 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0877 | 4.0 | 40 | 0.0373 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0491 | 5.0 | 50 | 0.0270 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20} | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Akash7897/gpt2-wikitext2
|
[
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-itquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qg): `vocabtrimmer/mbart-large-cc25-itquad-qg-trimmed-it-60000`
This model is a trimmed version of [lmqg/mbart-large-cc25-itquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qg) created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-itquad-qg | vocabtrimmer/mbart-large-cc25-itquad-qg-trimmed-it-60000 |
|:---------------------------|:----------------------------------|:-----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 416,267,264 |
| parameter_size_embedding | 512,057,344 | 122,886,144 |
| vocab_size | 250,028 | 60,003 |
| compression_rate_full | 100.0 | 68.15 |
| compression_rate_embedding | 100.0 | 24.0 |
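The compression rates follow directly from the parameter counts; as a quick check:

```python
print(f"{416_267_264 / 610_852_864:.2%}")   # full model -> 68.15%
print(f"{122_886_144 / 512_057_344:.2%}")   # embeddings -> 24.00%
```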
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 60000 | 2 |
|
Akash7897/test-clm
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-17T07:49:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: 2ndFrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook: it
# downloads and unpickles the model dict (env id, Q-table, ...) from the Hub.
model = load_from_hub(repo_id="Shivraj8615/2ndFrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Akashpb13/Swahili_xlsr
|
[
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sw",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Christian90/SoccerTwos1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Akashpb13/xlsr_hungarian_new
|
[
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hu",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | 2023-03-17T08:07:55Z |
---
tags:
- adapter-transformers
- roberta
datasets:
- glue
---
# Adapter `WillHeld/pfadapter-roberta-base-tada-ot` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("WillHeld/pfadapter-roberta-base-tada-ot", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Akbarariza/Anjar
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
inference: true
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
---
# Openjourney is an open-source Stable Diffusion model fine-tuned on Midjourney images, by [PromptHero](https://prompthero.com/poolsuite-diffusion-prompts?utm_source=huggingface&utm_medium=referral)
Include **'mdjrny-v4 style'** in your prompt. Here you'll find hundreds of [Openjourney prompts](https://prompthero.com/openjourney-prompts?utm_source=huggingface&utm_medium=referral)
# Openjourney Links
- [Lora version](https://huggingface.co/prompthero/openjourney-lora)
- [Openjourney v2](https://huggingface.co/prompthero/openjourney-v2)
# Want to learn AI art generation?
- [Crash course in AI art generation](https://prompthero.com/academy/prompt-engineering-course?utm_source=huggingface&utm_medium=referral)
- [Learn to fine-tune Stable Diffusion for photorealism](https://prompthero.com/academy/dreambooth-stable-diffusion-train-fine-tune-course?utm_source=huggingface&utm_medium=referral)
# Use it for free:
[](https://huggingface.co/spaces/akhaliq/midjourney-v4-diffusion)
### Stable Diffusion v1.5 vs Openjourney
(Same parameters, just added "mdjrny-v4 style" at the beginning):
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667904587642-63265d019f9d19bfd4f45031.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667904587623-63265d019f9d19bfd4f45031.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667904587609-63265d019f9d19bfd4f45031.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667904587646-63265d019f9d19bfd4f45031.png" width="100%"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "prompthero/openjourney"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "retro serie of different cars with different colors and shapes, mdjrny-v4 style"
image = pipe(prompt).images[0]
image.save("./retro_cars.png")
```
|
Akiva/Joke
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-17T08:21:36Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.50 +/- 20.65
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Aklily/Lilys
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-ruquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-ruquad-qg): `vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-90000`
This model is a trimmed version of [lmqg/mbart-large-cc25-ruquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-ruquad-qg) created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-ruquad-qg | vocabtrimmer/mbart-large-cc25-ruquad-qg-trimmed-ru-90000 |
|:---------------------------|:----------------------------------|:-----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 446,987,264 |
| parameter_size_embedding | 512,057,344 | 184,326,144 |
| vocab_size | 250,028 | 90,003 |
| compression_rate_full | 100.0 | 73.17 |
| compression_rate_embedding | 100.0 | 36.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ru | vocabtrimmer/mc4_validation | text | ru | validation | 90000 | 2 |
|
Aleksandar/distilbert-srb-ner-setimes
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: clinico-finetuned-augmented1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinico-finetuned-augmented1
This model is a fine-tuned version of [joheras/distilbert-base-spanish-uncased-finetuned-clinais](https://huggingface.co/joheras/distilbert-base-spanish-uncased-finetuned-clinais) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2935
- Precision: 0.3471
- Recall: 0.55
- F1: 0.4256
- Accuracy: 0.8319
## Model description
More information needed
## Intended uses & limitations
More information needed
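A minimal inference sketch with the 🤗 Transformers token-classification pipeline (the repo id is a placeholder, since this card does not state the checkpoint's Hub id, and the entity labels depend on the model's config):

```python
from transformers import pipeline

# Placeholder repo id: substitute this checkpoint's actual Hub id.
ner = pipeline("token-classification",
               model="<user>/clinico-finetuned-augmented1",
               aggregation_strategy="simple")
print(ner("Paciente de 70 años con antecedentes de hipertensión arterial."))
```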
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 23 | 1.3471 | 0.0013 | 0.0022 | 0.0016 | 0.5548 |
| No log | 2.0 | 46 | 1.0386 | 0.0120 | 0.0344 | 0.0179 | 0.6527 |
| No log | 3.0 | 69 | 0.9088 | 0.0410 | 0.0933 | 0.0569 | 0.7125 |
| No log | 4.0 | 92 | 0.8189 | 0.0604 | 0.1356 | 0.0835 | 0.7446 |
| No log | 5.0 | 115 | 0.7484 | 0.0943 | 0.1844 | 0.1248 | 0.7704 |
| No log | 6.0 | 138 | 0.7094 | 0.1132 | 0.2178 | 0.1489 | 0.7772 |
| No log | 7.0 | 161 | 0.7022 | 0.1169 | 0.2211 | 0.1530 | 0.7800 |
| No log | 8.0 | 184 | 0.6776 | 0.1575 | 0.2878 | 0.2036 | 0.7944 |
| No log | 9.0 | 207 | 0.6983 | 0.1493 | 0.2911 | 0.1974 | 0.7964 |
| No log | 10.0 | 230 | 0.6821 | 0.1772 | 0.33 | 0.2306 | 0.8021 |
| No log | 11.0 | 253 | 0.6845 | 0.1861 | 0.3511 | 0.2433 | 0.8053 |
| No log | 12.0 | 276 | 0.6874 | 0.2020 | 0.3756 | 0.2627 | 0.8106 |
| No log | 13.0 | 299 | 0.6973 | 0.2098 | 0.3967 | 0.2744 | 0.8102 |
| No log | 14.0 | 322 | 0.7230 | 0.2140 | 0.41 | 0.2812 | 0.8140 |
| No log | 15.0 | 345 | 0.7448 | 0.2243 | 0.4244 | 0.2935 | 0.8100 |
| No log | 16.0 | 368 | 0.7519 | 0.2378 | 0.4278 | 0.3057 | 0.8111 |
| No log | 17.0 | 391 | 0.7503 | 0.2326 | 0.4378 | 0.3038 | 0.8119 |
| No log | 18.0 | 414 | 0.7573 | 0.2494 | 0.4456 | 0.3198 | 0.8139 |
| No log | 19.0 | 437 | 0.7970 | 0.2435 | 0.4544 | 0.3171 | 0.8137 |
| No log | 20.0 | 460 | 0.7990 | 0.2523 | 0.4511 | 0.3236 | 0.8170 |
| No log | 21.0 | 483 | 0.8115 | 0.2484 | 0.4633 | 0.3234 | 0.8139 |
| 0.4675 | 22.0 | 506 | 0.8135 | 0.2572 | 0.4667 | 0.3316 | 0.8157 |
| 0.4675 | 23.0 | 529 | 0.8509 | 0.2644 | 0.4656 | 0.3372 | 0.8108 |
| 0.4675 | 24.0 | 552 | 0.8450 | 0.2600 | 0.4789 | 0.3370 | 0.8143 |
| 0.4675 | 25.0 | 575 | 0.8667 | 0.2777 | 0.4811 | 0.3522 | 0.8220 |
| 0.4675 | 26.0 | 598 | 0.8639 | 0.2634 | 0.4767 | 0.3393 | 0.8152 |
| 0.4675 | 27.0 | 621 | 0.8903 | 0.2790 | 0.4811 | 0.3532 | 0.8154 |
| 0.4675 | 28.0 | 644 | 0.8697 | 0.2885 | 0.4978 | 0.3653 | 0.8212 |
| 0.4675 | 29.0 | 667 | 0.8856 | 0.2851 | 0.4856 | 0.3592 | 0.8197 |
| 0.4675 | 30.0 | 690 | 0.9332 | 0.2670 | 0.4933 | 0.3465 | 0.8141 |
| 0.4675 | 31.0 | 713 | 0.9064 | 0.2929 | 0.5078 | 0.3715 | 0.8201 |
| 0.4675 | 32.0 | 736 | 0.9256 | 0.2915 | 0.5089 | 0.3707 | 0.8222 |
| 0.4675 | 33.0 | 759 | 0.9447 | 0.2959 | 0.51 | 0.3745 | 0.8175 |
| 0.4675 | 34.0 | 782 | 0.9387 | 0.2998 | 0.5133 | 0.3785 | 0.8199 |
| 0.4675 | 35.0 | 805 | 0.9545 | 0.3124 | 0.52 | 0.3903 | 0.8225 |
| 0.4675 | 36.0 | 828 | 0.9785 | 0.2962 | 0.5056 | 0.3736 | 0.8179 |
| 0.4675 | 37.0 | 851 | 0.9635 | 0.3004 | 0.5133 | 0.3790 | 0.8201 |
| 0.4675 | 38.0 | 874 | 0.9749 | 0.2810 | 0.5011 | 0.3601 | 0.8229 |
| 0.4675 | 39.0 | 897 | 0.9753 | 0.3018 | 0.5289 | 0.3843 | 0.8201 |
| 0.4675 | 40.0 | 920 | 0.9735 | 0.3001 | 0.5122 | 0.3785 | 0.8221 |
| 0.4675 | 41.0 | 943 | 1.0196 | 0.3033 | 0.5078 | 0.3797 | 0.8179 |
| 0.4675 | 42.0 | 966 | 0.9775 | 0.3037 | 0.5267 | 0.3852 | 0.8254 |
| 0.4675 | 43.0 | 989 | 1.0051 | 0.3028 | 0.5133 | 0.3809 | 0.8221 |
| 0.0384 | 44.0 | 1012 | 1.0053 | 0.3067 | 0.5256 | 0.3874 | 0.8250 |
| 0.0384 | 45.0 | 1035 | 1.0145 | 0.3090 | 0.5356 | 0.3919 | 0.8225 |
| 0.0384 | 46.0 | 1058 | 1.0299 | 0.3081 | 0.5244 | 0.3882 | 0.8212 |
| 0.0384 | 47.0 | 1081 | 1.0265 | 0.3231 | 0.5256 | 0.4002 | 0.8239 |
| 0.0384 | 48.0 | 1104 | 1.0151 | 0.3154 | 0.5278 | 0.3948 | 0.8248 |
| 0.0384 | 49.0 | 1127 | 1.0384 | 0.3229 | 0.5378 | 0.4035 | 0.8234 |
| 0.0384 | 50.0 | 1150 | 1.0641 | 0.3159 | 0.5311 | 0.3962 | 0.8207 |
| 0.0384 | 51.0 | 1173 | 1.0592 | 0.3123 | 0.5233 | 0.3912 | 0.8218 |
| 0.0384 | 52.0 | 1196 | 1.0314 | 0.3131 | 0.5378 | 0.3957 | 0.8238 |
| 0.0384 | 53.0 | 1219 | 1.0466 | 0.3169 | 0.5344 | 0.3978 | 0.8235 |
| 0.0384 | 54.0 | 1242 | 1.0402 | 0.3005 | 0.5156 | 0.3797 | 0.8229 |
| 0.0384 | 55.0 | 1265 | 1.0384 | 0.3149 | 0.5311 | 0.3954 | 0.8239 |
| 0.0384 | 56.0 | 1288 | 1.0401 | 0.3282 | 0.5456 | 0.4098 | 0.8234 |
| 0.0384 | 57.0 | 1311 | 1.0584 | 0.3221 | 0.5322 | 0.4013 | 0.8232 |
| 0.0384 | 58.0 | 1334 | 1.0665 | 0.3308 | 0.5411 | 0.4106 | 0.8256 |
| 0.0384 | 59.0 | 1357 | 1.0774 | 0.3201 | 0.5267 | 0.3982 | 0.8228 |
| 0.0384 | 60.0 | 1380 | 1.0800 | 0.3144 | 0.5344 | 0.3959 | 0.8201 |
| 0.0384 | 61.0 | 1403 | 1.1017 | 0.3071 | 0.52 | 0.3861 | 0.8197 |
| 0.0384 | 62.0 | 1426 | 1.1059 | 0.3218 | 0.5289 | 0.4002 | 0.8192 |
| 0.0384 | 63.0 | 1449 | 1.0926 | 0.3199 | 0.5367 | 0.4008 | 0.8216 |
| 0.0384 | 64.0 | 1472 | 1.0825 | 0.3135 | 0.5267 | 0.3930 | 0.8262 |
| 0.0384 | 65.0 | 1495 | 1.1075 | 0.3190 | 0.5344 | 0.3995 | 0.8213 |
| 0.0109 | 66.0 | 1518 | 1.1220 | 0.3174 | 0.5233 | 0.3951 | 0.8201 |
| 0.0109 | 67.0 | 1541 | 1.1047 | 0.3233 | 0.5367 | 0.4035 | 0.8221 |
| 0.0109 | 68.0 | 1564 | 1.1506 | 0.3203 | 0.5367 | 0.4012 | 0.8226 |
| 0.0109 | 69.0 | 1587 | 1.1263 | 0.3188 | 0.5289 | 0.3978 | 0.8229 |
| 0.0109 | 70.0 | 1610 | 1.1126 | 0.3150 | 0.5344 | 0.3964 | 0.8217 |
| 0.0109 | 71.0 | 1633 | 1.1014 | 0.3210 | 0.5389 | 0.4023 | 0.8239 |
| 0.0109 | 72.0 | 1656 | 1.1620 | 0.3213 | 0.5444 | 0.4041 | 0.8206 |
| 0.0109 | 73.0 | 1679 | 1.1406 | 0.3400 | 0.55 | 0.4202 | 0.8229 |
| 0.0109 | 74.0 | 1702 | 1.1173 | 0.3276 | 0.5467 | 0.4097 | 0.8224 |
| 0.0109 | 75.0 | 1725 | 1.1185 | 0.3274 | 0.5533 | 0.4114 | 0.8256 |
| 0.0109 | 76.0 | 1748 | 1.1058 | 0.3215 | 0.5422 | 0.4036 | 0.8235 |
| 0.0109 | 77.0 | 1771 | 1.1212 | 0.3250 | 0.5356 | 0.4045 | 0.8269 |
| 0.0109 | 78.0 | 1794 | 1.1542 | 0.3185 | 0.5267 | 0.3970 | 0.8237 |
| 0.0109 | 79.0 | 1817 | 1.1348 | 0.3299 | 0.54 | 0.4096 | 0.8237 |
| 0.0109 | 80.0 | 1840 | 1.1355 | 0.3282 | 0.5411 | 0.4086 | 0.8244 |
| 0.0109 | 81.0 | 1863 | 1.1255 | 0.3172 | 0.5456 | 0.4011 | 0.8238 |
| 0.0109 | 82.0 | 1886 | 1.1328 | 0.3322 | 0.5467 | 0.4133 | 0.8232 |
| 0.0109 | 83.0 | 1909 | 1.1444 | 0.3359 | 0.54 | 0.4141 | 0.8232 |
| 0.0109 | 84.0 | 1932 | 1.1474 | 0.3395 | 0.5467 | 0.4189 | 0.8231 |
| 0.0109 | 85.0 | 1955 | 1.1526 | 0.3336 | 0.5411 | 0.4127 | 0.8225 |
| 0.0109 | 86.0 | 1978 | 1.1408 | 0.3327 | 0.5411 | 0.4120 | 0.8262 |
| 0.0055 | 87.0 | 2001 | 1.1414 | 0.3269 | 0.5433 | 0.4082 | 0.8232 |
| 0.0055 | 88.0 | 2024 | 1.1626 | 0.3354 | 0.54 | 0.4138 | 0.8230 |
| 0.0055 | 89.0 | 2047 | 1.1622 | 0.3394 | 0.5378 | 0.4162 | 0.8251 |
| 0.0055 | 90.0 | 2070 | 1.1423 | 0.3251 | 0.5244 | 0.4014 | 0.8264 |
| 0.0055 | 91.0 | 2093 | 1.1629 | 0.3290 | 0.5322 | 0.4066 | 0.8252 |
| 0.0055 | 92.0 | 2116 | 1.1478 | 0.3372 | 0.5544 | 0.4193 | 0.8238 |
| 0.0055 | 93.0 | 2139 | 1.1847 | 0.3280 | 0.5267 | 0.4043 | 0.8252 |
| 0.0055 | 94.0 | 2162 | 1.1801 | 0.3447 | 0.5411 | 0.4211 | 0.8204 |
| 0.0055 | 95.0 | 2185 | 1.1526 | 0.3297 | 0.5378 | 0.4088 | 0.8258 |
| 0.0055 | 96.0 | 2208 | 1.1786 | 0.3475 | 0.5433 | 0.4239 | 0.8230 |
| 0.0055 | 97.0 | 2231 | 1.1672 | 0.3347 | 0.5411 | 0.4136 | 0.8247 |
| 0.0055 | 98.0 | 2254 | 1.2031 | 0.3230 | 0.5422 | 0.4048 | 0.8214 |
| 0.0055 | 99.0 | 2277 | 1.1589 | 0.3347 | 0.5478 | 0.4155 | 0.8280 |
| 0.0055 | 100.0 | 2300 | 1.2049 | 0.3276 | 0.5533 | 0.4116 | 0.8214 |
| 0.0055 | 101.0 | 2323 | 1.1695 | 0.3304 | 0.5422 | 0.4106 | 0.8248 |
| 0.0055 | 102.0 | 2346 | 1.1933 | 0.3406 | 0.5522 | 0.4214 | 0.8218 |
| 0.0055 | 103.0 | 2369 | 1.1821 | 0.3351 | 0.5544 | 0.4177 | 0.8231 |
| 0.0055 | 104.0 | 2392 | 1.2068 | 0.3369 | 0.5311 | 0.4122 | 0.8207 |
| 0.0055 | 105.0 | 2415 | 1.1866 | 0.3384 | 0.5478 | 0.4183 | 0.8214 |
| 0.0055 | 106.0 | 2438 | 1.1917 | 0.3347 | 0.5544 | 0.4174 | 0.8243 |
| 0.0055 | 107.0 | 2461 | 1.2080 | 0.3304 | 0.5456 | 0.4116 | 0.8232 |
| 0.0055 | 108.0 | 2484 | 1.1625 | 0.3351 | 0.5544 | 0.4177 | 0.8258 |
| 0.0034 | 109.0 | 2507 | 1.2035 | 0.3251 | 0.5378 | 0.4052 | 0.8238 |
| 0.0034 | 110.0 | 2530 | 1.1610 | 0.3340 | 0.5456 | 0.4143 | 0.8276 |
| 0.0034 | 111.0 | 2553 | 1.1941 | 0.3522 | 0.5533 | 0.4304 | 0.8267 |
| 0.0034 | 112.0 | 2576 | 1.1784 | 0.3424 | 0.5467 | 0.4211 | 0.8284 |
| 0.0034 | 113.0 | 2599 | 1.1831 | 0.3351 | 0.5478 | 0.4159 | 0.8301 |
| 0.0034 | 114.0 | 2622 | 1.1999 | 0.3368 | 0.5467 | 0.4168 | 0.8265 |
| 0.0034 | 115.0 | 2645 | 1.1896 | 0.3407 | 0.5467 | 0.4198 | 0.8277 |
| 0.0034 | 116.0 | 2668 | 1.1865 | 0.3451 | 0.5433 | 0.4221 | 0.8291 |
| 0.0034 | 117.0 | 2691 | 1.2529 | 0.3498 | 0.55 | 0.4276 | 0.8228 |
| 0.0034 | 118.0 | 2714 | 1.2159 | 0.3356 | 0.5478 | 0.4162 | 0.8267 |
| 0.0034 | 119.0 | 2737 | 1.1932 | 0.3401 | 0.5556 | 0.4219 | 0.8283 |
| 0.0034 | 120.0 | 2760 | 1.1842 | 0.3419 | 0.5467 | 0.4207 | 0.8287 |
| 0.0034 | 121.0 | 2783 | 1.2179 | 0.3407 | 0.5456 | 0.4195 | 0.8270 |
| 0.0034 | 122.0 | 2806 | 1.2617 | 0.3322 | 0.5489 | 0.4139 | 0.8233 |
| 0.0034 | 123.0 | 2829 | 1.1982 | 0.3356 | 0.5544 | 0.4181 | 0.8237 |
| 0.0034 | 124.0 | 2852 | 1.1664 | 0.3379 | 0.5533 | 0.4195 | 0.8297 |
| 0.0034 | 125.0 | 2875 | 1.1780 | 0.3599 | 0.5578 | 0.4375 | 0.8299 |
| 0.0034 | 126.0 | 2898 | 1.2004 | 0.3515 | 0.5578 | 0.4313 | 0.8310 |
| 0.0034 | 127.0 | 2921 | 1.1824 | 0.3405 | 0.5478 | 0.4199 | 0.8325 |
| 0.0034 | 128.0 | 2944 | 1.1934 | 0.3390 | 0.5533 | 0.4204 | 0.8277 |
| 0.0034 | 129.0 | 2967 | 1.1975 | 0.3409 | 0.5511 | 0.4212 | 0.8288 |
| 0.0034 | 130.0 | 2990 | 1.1867 | 0.3313 | 0.5567 | 0.4154 | 0.8266 |
| 0.0025 | 131.0 | 3013 | 1.2359 | 0.3336 | 0.5567 | 0.4172 | 0.8191 |
| 0.0025 | 132.0 | 3036 | 1.2057 | 0.3302 | 0.5544 | 0.4139 | 0.8263 |
| 0.0025 | 133.0 | 3059 | 1.1999 | 0.3480 | 0.5544 | 0.4276 | 0.8304 |
| 0.0025 | 134.0 | 3082 | 1.2098 | 0.3230 | 0.5544 | 0.4082 | 0.8246 |
| 0.0025 | 135.0 | 3105 | 1.2305 | 0.3255 | 0.54 | 0.4062 | 0.8227 |
| 0.0025 | 136.0 | 3128 | 1.2238 | 0.3290 | 0.5356 | 0.4076 | 0.8255 |
| 0.0025 | 137.0 | 3151 | 1.2014 | 0.3530 | 0.5456 | 0.4286 | 0.8258 |
| 0.0025 | 138.0 | 3174 | 1.2373 | 0.3536 | 0.5611 | 0.4338 | 0.8248 |
| 0.0025 | 139.0 | 3197 | 1.2234 | 0.3333 | 0.5544 | 0.4164 | 0.8256 |
| 0.0025 | 140.0 | 3220 | 1.2205 | 0.3345 | 0.5433 | 0.4141 | 0.8278 |
| 0.0025 | 141.0 | 3243 | 1.2107 | 0.3406 | 0.5522 | 0.4214 | 0.8275 |
| 0.0025 | 142.0 | 3266 | 1.2316 | 0.3467 | 0.5478 | 0.4246 | 0.8233 |
| 0.0025 | 143.0 | 3289 | 1.2420 | 0.3431 | 0.5489 | 0.4222 | 0.8232 |
| 0.0025 | 144.0 | 3312 | 1.2324 | 0.3430 | 0.5533 | 0.4235 | 0.8247 |
| 0.0025 | 145.0 | 3335 | 1.2120 | 0.3391 | 0.5489 | 0.4192 | 0.8274 |
| 0.0025 | 146.0 | 3358 | 1.2396 | 0.3520 | 0.5589 | 0.4319 | 0.8249 |
| 0.0025 | 147.0 | 3381 | 1.2337 | 0.3502 | 0.5622 | 0.4316 | 0.8252 |
| 0.0025 | 148.0 | 3404 | 1.2046 | 0.3436 | 0.5556 | 0.4246 | 0.8305 |
| 0.0025 | 149.0 | 3427 | 1.2149 | 0.3406 | 0.5589 | 0.4232 | 0.8262 |
| 0.0025 | 150.0 | 3450 | 1.2147 | 0.3469 | 0.5589 | 0.4281 | 0.8276 |
| 0.0025 | 151.0 | 3473 | 1.2293 | 0.3486 | 0.5578 | 0.4291 | 0.8271 |
| 0.0025 | 152.0 | 3496 | 1.2431 | 0.3596 | 0.5622 | 0.4387 | 0.8274 |
| 0.0019 | 153.0 | 3519 | 1.2356 | 0.3603 | 0.5533 | 0.4365 | 0.8270 |
| 0.0019 | 154.0 | 3542 | 1.2459 | 0.3600 | 0.5656 | 0.4399 | 0.8266 |
| 0.0019 | 155.0 | 3565 | 1.2203 | 0.3524 | 0.5556 | 0.4312 | 0.8312 |
| 0.0019 | 156.0 | 3588 | 1.2337 | 0.3653 | 0.5633 | 0.4432 | 0.8304 |
| 0.0019 | 157.0 | 3611 | 1.2318 | 0.3572 | 0.5644 | 0.4376 | 0.8290 |
| 0.0019 | 158.0 | 3634 | 1.2456 | 0.3542 | 0.5589 | 0.4336 | 0.8270 |
| 0.0019 | 159.0 | 3657 | 1.2533 | 0.3474 | 0.5567 | 0.4278 | 0.8252 |
| 0.0019 | 160.0 | 3680 | 1.2369 | 0.3522 | 0.5544 | 0.4307 | 0.8305 |
| 0.0019 | 161.0 | 3703 | 1.2516 | 0.3645 | 0.5589 | 0.4412 | 0.8292 |
| 0.0019 | 162.0 | 3726 | 1.2369 | 0.3448 | 0.5567 | 0.4258 | 0.8296 |
| 0.0019 | 163.0 | 3749 | 1.2350 | 0.3544 | 0.56 | 0.4341 | 0.8316 |
| 0.0019 | 164.0 | 3772 | 1.2407 | 0.3493 | 0.5589 | 0.4299 | 0.8280 |
| 0.0019 | 165.0 | 3795 | 1.2450 | 0.3431 | 0.5578 | 0.4249 | 0.8287 |
| 0.0019 | 166.0 | 3818 | 1.2396 | 0.3593 | 0.5633 | 0.4388 | 0.8312 |
| 0.0019 | 167.0 | 3841 | 1.2487 | 0.3568 | 0.5689 | 0.4385 | 0.8283 |
| 0.0019 | 168.0 | 3864 | 1.2995 | 0.3504 | 0.5622 | 0.4317 | 0.8231 |
| 0.0019 | 169.0 | 3887 | 1.2728 | 0.3502 | 0.57 | 0.4338 | 0.8261 |
| 0.0019 | 170.0 | 3910 | 1.2672 | 0.3605 | 0.5644 | 0.4400 | 0.8263 |
| 0.0019 | 171.0 | 3933 | 1.2394 | 0.3580 | 0.5489 | 0.4333 | 0.8301 |
| 0.0019 | 172.0 | 3956 | 1.2566 | 0.3539 | 0.5556 | 0.4323 | 0.8279 |
| 0.0019 | 173.0 | 3979 | 1.2587 | 0.3514 | 0.5533 | 0.4299 | 0.8281 |
| 0.0013 | 174.0 | 4002 | 1.2540 | 0.3489 | 0.5467 | 0.4260 | 0.8307 |
| 0.0013 | 175.0 | 4025 | 1.2817 | 0.3495 | 0.5511 | 0.4278 | 0.8231 |
| 0.0013 | 176.0 | 4048 | 1.2569 | 0.3543 | 0.5578 | 0.4333 | 0.8275 |
| 0.0013 | 177.0 | 4071 | 1.3119 | 0.3562 | 0.5544 | 0.4337 | 0.8245 |
| 0.0013 | 178.0 | 4094 | 1.3102 | 0.3423 | 0.5522 | 0.4226 | 0.8213 |
| 0.0013 | 179.0 | 4117 | 1.2313 | 0.3407 | 0.5489 | 0.4204 | 0.8320 |
| 0.0013 | 180.0 | 4140 | 1.2375 | 0.3506 | 0.5567 | 0.4302 | 0.8272 |
| 0.0013 | 181.0 | 4163 | 1.2360 | 0.3409 | 0.5489 | 0.4206 | 0.8289 |
| 0.0013 | 182.0 | 4186 | 1.2685 | 0.3465 | 0.5544 | 0.4265 | 0.8281 |
| 0.0013 | 183.0 | 4209 | 1.3033 | 0.3510 | 0.5511 | 0.4289 | 0.8238 |
| 0.0013 | 184.0 | 4232 | 1.2407 | 0.3626 | 0.5511 | 0.4374 | 0.8319 |
| 0.0013 | 185.0 | 4255 | 1.2542 | 0.3605 | 0.5644 | 0.4400 | 0.8284 |
| 0.0013 | 186.0 | 4278 | 1.2654 | 0.3637 | 0.5633 | 0.4420 | 0.8278 |
| 0.0013 | 187.0 | 4301 | 1.2667 | 0.3516 | 0.5644 | 0.4333 | 0.8280 |
| 0.0013 | 188.0 | 4324 | 1.2661 | 0.3542 | 0.5667 | 0.4359 | 0.8293 |
| 0.0013 | 189.0 | 4347 | 1.2831 | 0.3524 | 0.5533 | 0.4306 | 0.8257 |
| 0.0013 | 190.0 | 4370 | 1.2593 | 0.3603 | 0.55 | 0.4354 | 0.8288 |
| 0.0013 | 191.0 | 4393 | 1.2186 | 0.3501 | 0.5711 | 0.4341 | 0.8284 |
| 0.0013 | 192.0 | 4416 | 1.2797 | 0.3576 | 0.5622 | 0.4371 | 0.8222 |
| 0.0013 | 193.0 | 4439 | 1.2491 | 0.3501 | 0.5656 | 0.4325 | 0.8264 |
| 0.0013 | 194.0 | 4462 | 1.2456 | 0.3580 | 0.5544 | 0.4350 | 0.8305 |
| 0.0013 | 195.0 | 4485 | 1.2567 | 0.3542 | 0.5478 | 0.4302 | 0.8275 |
| 0.0014 | 196.0 | 4508 | 1.2551 | 0.3512 | 0.5611 | 0.4320 | 0.8274 |
| 0.0014 | 197.0 | 4531 | 1.2512 | 0.3468 | 0.5611 | 0.4287 | 0.8295 |
| 0.0014 | 198.0 | 4554 | 1.2607 | 0.3465 | 0.5556 | 0.4268 | 0.8279 |
| 0.0014 | 199.0 | 4577 | 1.2703 | 0.3501 | 0.5556 | 0.4296 | 0.8264 |
| 0.0014 | 200.0 | 4600 | 1.2536 | 0.3421 | 0.5511 | 0.4221 | 0.8289 |
| 0.0014 | 201.0 | 4623 | 1.2978 | 0.3302 | 0.5467 | 0.4117 | 0.8242 |
| 0.0014 | 202.0 | 4646 | 1.2775 | 0.3495 | 0.5456 | 0.4260 | 0.8279 |
| 0.0014 | 203.0 | 4669 | 1.2697 | 0.3467 | 0.5556 | 0.4270 | 0.8274 |
| 0.0014 | 204.0 | 4692 | 1.2595 | 0.3525 | 0.5511 | 0.4300 | 0.8295 |
| 0.0014 | 205.0 | 4715 | 1.2565 | 0.3479 | 0.5578 | 0.4285 | 0.8291 |
| 0.0014 | 206.0 | 4738 | 1.2547 | 0.3592 | 0.5611 | 0.4380 | 0.8316 |
| 0.0014 | 207.0 | 4761 | 1.2585 | 0.3537 | 0.5533 | 0.4315 | 0.8303 |
| 0.0014 | 208.0 | 4784 | 1.2788 | 0.3543 | 0.5622 | 0.4347 | 0.8295 |
| 0.0014 | 209.0 | 4807 | 1.2566 | 0.3655 | 0.5556 | 0.4409 | 0.8327 |
| 0.0014 | 210.0 | 4830 | 1.2612 | 0.3655 | 0.5633 | 0.4434 | 0.8326 |
| 0.0014 | 211.0 | 4853 | 1.2631 | 0.3532 | 0.56 | 0.4332 | 0.8321 |
| 0.0014 | 212.0 | 4876 | 1.2867 | 0.3563 | 0.5467 | 0.4314 | 0.8304 |
| 0.0014 | 213.0 | 4899 | 1.2959 | 0.3417 | 0.5444 | 0.4199 | 0.8272 |
| 0.0014 | 214.0 | 4922 | 1.2547 | 0.3577 | 0.5556 | 0.4352 | 0.8339 |
| 0.0014 | 215.0 | 4945 | 1.2812 | 0.3491 | 0.5567 | 0.4291 | 0.8284 |
| 0.0014 | 216.0 | 4968 | 1.2780 | 0.3455 | 0.5489 | 0.4240 | 0.8313 |
| 0.0014 | 217.0 | 4991 | 1.2361 | 0.3500 | 0.5511 | 0.4281 | 0.8349 |
| 0.0011 | 218.0 | 5014 | 1.2627 | 0.3324 | 0.5433 | 0.4125 | 0.8304 |
| 0.0011 | 219.0 | 5037 | 1.2884 | 0.3547 | 0.5544 | 0.4326 | 0.8282 |
| 0.0011 | 220.0 | 5060 | 1.2828 | 0.3320 | 0.5456 | 0.4128 | 0.8270 |
| 0.0011 | 221.0 | 5083 | 1.3193 | 0.3527 | 0.5467 | 0.4288 | 0.8259 |
| 0.0011 | 222.0 | 5106 | 1.3023 | 0.3555 | 0.5522 | 0.4326 | 0.8246 |
| 0.0011 | 223.0 | 5129 | 1.2836 | 0.3354 | 0.5511 | 0.4170 | 0.8305 |
| 0.0011 | 224.0 | 5152 | 1.3094 | 0.3469 | 0.5511 | 0.4258 | 0.8271 |
| 0.0011 | 225.0 | 5175 | 1.2760 | 0.3538 | 0.5511 | 0.4309 | 0.8338 |
| 0.0011 | 226.0 | 5198 | 1.2786 | 0.3494 | 0.5567 | 0.4293 | 0.8336 |
| 0.0011 | 227.0 | 5221 | 1.2766 | 0.3564 | 0.5556 | 0.4342 | 0.8340 |
| 0.0011 | 228.0 | 5244 | 1.2738 | 0.3555 | 0.5533 | 0.4329 | 0.8336 |
| 0.0011 | 229.0 | 5267 | 1.2871 | 0.3538 | 0.5578 | 0.4329 | 0.8292 |
| 0.0011 | 230.0 | 5290 | 1.2836 | 0.3568 | 0.5578 | 0.4352 | 0.8321 |
| 0.0011 | 231.0 | 5313 | 1.2787 | 0.3534 | 0.5533 | 0.4314 | 0.8300 |
| 0.0011 | 232.0 | 5336 | 1.2907 | 0.3538 | 0.5578 | 0.4329 | 0.8327 |
| 0.0011 | 233.0 | 5359 | 1.2812 | 0.3544 | 0.5556 | 0.4327 | 0.8316 |
| 0.0011 | 234.0 | 5382 | 1.2802 | 0.3543 | 0.5567 | 0.4330 | 0.8303 |
| 0.0011 | 235.0 | 5405 | 1.3033 | 0.3484 | 0.5567 | 0.4286 | 0.8284 |
| 0.0011 | 236.0 | 5428 | 1.2852 | 0.3491 | 0.5578 | 0.4294 | 0.8301 |
| 0.0011 | 237.0 | 5451 | 1.2866 | 0.3481 | 0.5511 | 0.4267 | 0.8291 |
| 0.0011 | 238.0 | 5474 | 1.2914 | 0.3524 | 0.5544 | 0.4309 | 0.8297 |
| 0.0011 | 239.0 | 5497 | 1.2944 | 0.3432 | 0.5522 | 0.4233 | 0.8278 |
| 0.0009 | 240.0 | 5520 | 1.2860 | 0.3470 | 0.5567 | 0.4275 | 0.8308 |
| 0.0009 | 241.0 | 5543 | 1.2786 | 0.3516 | 0.5489 | 0.4286 | 0.8309 |
| 0.0009 | 242.0 | 5566 | 1.2743 | 0.3425 | 0.5511 | 0.4225 | 0.8307 |
| 0.0009 | 243.0 | 5589 | 1.2843 | 0.3467 | 0.5489 | 0.4249 | 0.8285 |
| 0.0009 | 244.0 | 5612 | 1.3106 | 0.3372 | 0.5522 | 0.4187 | 0.8266 |
| 0.0009 | 245.0 | 5635 | 1.2952 | 0.3537 | 0.5456 | 0.4292 | 0.8306 |
| 0.0009 | 246.0 | 5658 | 1.2716 | 0.3503 | 0.5511 | 0.4283 | 0.8340 |
| 0.0009 | 247.0 | 5681 | 1.3108 | 0.3452 | 0.5511 | 0.4245 | 0.8254 |
| 0.0009 | 248.0 | 5704 | 1.3098 | 0.3322 | 0.54 | 0.4113 | 0.8283 |
| 0.0009 | 249.0 | 5727 | 1.2782 | 0.3605 | 0.5544 | 0.4370 | 0.8325 |
| 0.0009 | 250.0 | 5750 | 1.2979 | 0.3449 | 0.5522 | 0.4246 | 0.8294 |
| 0.0009 | 251.0 | 5773 | 1.2919 | 0.3492 | 0.5544 | 0.4285 | 0.8313 |
| 0.0009 | 252.0 | 5796 | 1.2946 | 0.3385 | 0.5533 | 0.4201 | 0.8313 |
| 0.0009 | 253.0 | 5819 | 1.2994 | 0.3439 | 0.5544 | 0.4245 | 0.8312 |
| 0.0009 | 254.0 | 5842 | 1.2962 | 0.3522 | 0.5544 | 0.4307 | 0.8312 |
| 0.0009 | 255.0 | 5865 | 1.3038 | 0.3577 | 0.5556 | 0.4352 | 0.8300 |
| 0.0009 | 256.0 | 5888 | 1.3091 | 0.3492 | 0.5556 | 0.4288 | 0.8315 |
| 0.0009 | 257.0 | 5911 | 1.2943 | 0.3477 | 0.5556 | 0.4277 | 0.8315 |
| 0.0009 | 258.0 | 5934 | 1.3091 | 0.3427 | 0.5556 | 0.4239 | 0.8282 |
| 0.0009 | 259.0 | 5957 | 1.2957 | 0.3391 | 0.5478 | 0.4189 | 0.8325 |
| 0.0009 | 260.0 | 5980 | 1.2927 | 0.3367 | 0.55 | 0.4177 | 0.8293 |
| 0.0008 | 261.0 | 6003 | 1.2926 | 0.3363 | 0.55 | 0.4174 | 0.8309 |
| 0.0008 | 262.0 | 6026 | 1.2889 | 0.3449 | 0.5533 | 0.4249 | 0.8326 |
| 0.0008 | 263.0 | 6049 | 1.3226 | 0.3413 | 0.5522 | 0.4219 | 0.8260 |
| 0.0008 | 264.0 | 6072 | 1.2979 | 0.3550 | 0.5578 | 0.4339 | 0.8284 |
| 0.0008 | 265.0 | 6095 | 1.3032 | 0.3493 | 0.5511 | 0.4276 | 0.8289 |
| 0.0008 | 266.0 | 6118 | 1.3065 | 0.3495 | 0.5533 | 0.4284 | 0.8300 |
| 0.0008 | 267.0 | 6141 | 1.2992 | 0.3524 | 0.5467 | 0.4286 | 0.8316 |
| 0.0008 | 268.0 | 6164 | 1.2990 | 0.3560 | 0.5522 | 0.4329 | 0.8310 |
| 0.0008 | 269.0 | 6187 | 1.3041 | 0.3506 | 0.5489 | 0.4279 | 0.8305 |
| 0.0008 | 270.0 | 6210 | 1.3046 | 0.3468 | 0.5444 | 0.4237 | 0.8306 |
| 0.0008 | 271.0 | 6233 | 1.3001 | 0.3626 | 0.5556 | 0.4388 | 0.8317 |
| 0.0008 | 272.0 | 6256 | 1.3011 | 0.3611 | 0.5533 | 0.4370 | 0.8304 |
| 0.0008 | 273.0 | 6279 | 1.3057 | 0.3454 | 0.5522 | 0.4250 | 0.8297 |
| 0.0008 | 274.0 | 6302 | 1.3142 | 0.3416 | 0.5478 | 0.4208 | 0.8280 |
| 0.0008 | 275.0 | 6325 | 1.2959 | 0.3463 | 0.5544 | 0.4263 | 0.8316 |
| 0.0008 | 276.0 | 6348 | 1.2905 | 0.3512 | 0.5544 | 0.4300 | 0.8324 |
| 0.0008 | 277.0 | 6371 | 1.3094 | 0.3531 | 0.5489 | 0.4298 | 0.8301 |
| 0.0008 | 278.0 | 6394 | 1.3291 | 0.3472 | 0.5456 | 0.4244 | 0.8275 |
| 0.0008 | 279.0 | 6417 | 1.2861 | 0.3545 | 0.5522 | 0.4318 | 0.8332 |
| 0.0008 | 280.0 | 6440 | 1.2926 | 0.3549 | 0.5544 | 0.4328 | 0.8307 |
| 0.0008 | 281.0 | 6463 | 1.2926 | 0.3395 | 0.55 | 0.4198 | 0.8313 |
| 0.0008 | 282.0 | 6486 | 1.2915 | 0.3447 | 0.5511 | 0.4241 | 0.8320 |
| 0.0007 | 283.0 | 6509 | 1.2848 | 0.3524 | 0.5544 | 0.4309 | 0.8327 |
| 0.0007 | 284.0 | 6532 | 1.2948 | 0.3491 | 0.55 | 0.4271 | 0.8301 |
| 0.0007 | 285.0 | 6555 | 1.2945 | 0.3492 | 0.5533 | 0.4282 | 0.8311 |
| 0.0007 | 286.0 | 6578 | 1.2921 | 0.3580 | 0.5544 | 0.4350 | 0.8316 |
| 0.0007 | 287.0 | 6601 | 1.2980 | 0.3462 | 0.5489 | 0.4246 | 0.8305 |
| 0.0007 | 288.0 | 6624 | 1.2962 | 0.3388 | 0.5478 | 0.4187 | 0.8310 |
| 0.0007 | 289.0 | 6647 | 1.2928 | 0.3416 | 0.5478 | 0.4208 | 0.8315 |
| 0.0007 | 290.0 | 6670 | 1.2891 | 0.3503 | 0.55 | 0.4280 | 0.8318 |
| 0.0007 | 291.0 | 6693 | 1.2905 | 0.3587 | 0.5544 | 0.4356 | 0.8317 |
| 0.0007 | 292.0 | 6716 | 1.2954 | 0.3559 | 0.55 | 0.4321 | 0.8312 |
| 0.0007 | 293.0 | 6739 | 1.2948 | 0.3510 | 0.5522 | 0.4292 | 0.8311 |
| 0.0007 | 294.0 | 6762 | 1.2944 | 0.3520 | 0.5522 | 0.4299 | 0.8311 |
| 0.0007 | 295.0 | 6785 | 1.2957 | 0.3499 | 0.5489 | 0.4273 | 0.8310 |
| 0.0007 | 296.0 | 6808 | 1.2960 | 0.3489 | 0.5478 | 0.4263 | 0.8311 |
| 0.0007 | 297.0 | 6831 | 1.2938 | 0.3481 | 0.55 | 0.4264 | 0.8319 |
| 0.0007 | 298.0 | 6854 | 1.2933 | 0.3447 | 0.55 | 0.4238 | 0.8317 |
| 0.0007 | 299.0 | 6877 | 1.2933 | 0.3456 | 0.5511 | 0.4248 | 0.8319 |
| 0.0007 | 300.0 | 6900 | 1.2935 | 0.3471 | 0.55 | 0.4256 | 0.8319 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.12.1
|
Aleksandar/distilbert-srb-ner
|
[
"pytorch",
"distilbert",
"token-classification",
"sr",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | 2023-03-17T09:15:27Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 100.00 +/- 83.79
name: mean_reward
verified: false
---
# **RecurrentPPO** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **RecurrentPPO** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env SpaceInvadersNoFrameskip-v4 -orga dussinus -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env SpaceInvadersNoFrameskip-v4 -orga dussinus -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo ppo_lstm --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo_lstm --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dussinus
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('clip_range', 'lin_0.1'),
('ent_coef', 0.012),
('env_wrapper',
[{'stable_baselines3.common.atari_wrappers.AtariWrapper': {'terminal_on_life_loss': False}}]),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 6),
('n_epochs', 4),
('n_steps', 256),
('n_timesteps', 5000000.0),
('policy', 'CnnLstmPolicy'),
('policy_kwargs',
'dict(enable_critic_lstm=False, lstm_hidden_size=128, )'),
('vf_coef', 0.5),
('normalize', False)])
```
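Outside the zoo scripts, the checkpoint can also be loaded programmatically; a minimal sketch, assuming the repo id and filename follow the usual RL Zoo naming convention (adjust if they differ):
```python
from huggingface_sb3 import load_from_hub
from sb3_contrib import RecurrentPPO

# Assumed RL Zoo naming convention for this agent's Hub location.
checkpoint = load_from_hub(
    repo_id="dussinus/ppo_lstm-SpaceInvadersNoFrameskip-v4",
    filename="ppo_lstm-SpaceInvadersNoFrameskip-v4.zip",
)
model = RecurrentPPO.load(checkpoint)
```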
|
AlekseyKulnevich/Pegasus-Summarization
|
[
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.61 +/- 0.16
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, not this model's confirmed Hub coordinates):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id and filename; replace with this model's actual Hub coordinates.
checkpoint = load_from_hub(
    repo_id="<user>/<repo>",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```
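To roll the loaded policy out, the environment can be created with panda-gym (an assumption: panda-gym 2.x with the classic `gym` API is installed; `model` comes from the sketch above):
```python
import gym
import panda_gym  # noqa: F401 -- importing registers the Panda tasks with gym

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```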
|
AlexeyYazev/my-awesome-model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-finetuned-paper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-paper
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1620
- Precision: 0.7605
- Recall: 0.8141
- F1: 0.7864
- Accuracy: 0.9765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
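For reference, these settings correspond to roughly the following `TrainingArguments` (a sketch, not the exact training script; the Adam betas and epsilon listed above are the library defaults, and `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-finetuned-paper",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```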
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 73 | 0.1575 | 0.6484 | 0.5673 | 0.6051 | 0.9591 |
| No log | 2.0 | 146 | 0.0964 | 0.6723 | 0.7628 | 0.7147 | 0.9718 |
| No log | 3.0 | 219 | 0.1233 | 0.6447 | 0.7853 | 0.7081 | 0.9655 |
| No log | 4.0 | 292 | 0.1153 | 0.7563 | 0.7660 | 0.7611 | 0.9737 |
| No log | 5.0 | 365 | 0.1194 | 0.7265 | 0.8173 | 0.7692 | 0.9727 |
| No log | 6.0 | 438 | 0.1243 | 0.7286 | 0.8173 | 0.7704 | 0.9722 |
| 0.0974 | 7.0 | 511 | 0.1406 | 0.7202 | 0.7756 | 0.7469 | 0.9732 |
| 0.0974 | 8.0 | 584 | 0.1436 | 0.7406 | 0.7596 | 0.7500 | 0.9706 |
| 0.0974 | 9.0 | 657 | 0.1687 | 0.7524 | 0.7596 | 0.7560 | 0.9738 |
| 0.0974 | 10.0 | 730 | 0.1591 | 0.7394 | 0.7821 | 0.7601 | 0.9743 |
| 0.0974 | 11.0 | 803 | 0.1431 | 0.7619 | 0.8205 | 0.7901 | 0.9754 |
| 0.0974 | 12.0 | 876 | 0.1487 | 0.7477 | 0.7981 | 0.7721 | 0.9745 |
| 0.0974 | 13.0 | 949 | 0.1512 | 0.7764 | 0.8013 | 0.7886 | 0.9763 |
| 0.0043 | 14.0 | 1022 | 0.1532 | 0.7645 | 0.8013 | 0.7825 | 0.9754 |
| 0.0043 | 15.0 | 1095 | 0.1531 | 0.7720 | 0.8141 | 0.7925 | 0.9761 |
| 0.0043 | 16.0 | 1168 | 0.1590 | 0.7635 | 0.8173 | 0.7895 | 0.9756 |
| 0.0043 | 17.0 | 1241 | 0.1615 | 0.7559 | 0.8237 | 0.7883 | 0.9754 |
| 0.0043 | 18.0 | 1314 | 0.1624 | 0.7612 | 0.8173 | 0.7883 | 0.9759 |
| 0.0043 | 19.0 | 1387 | 0.1622 | 0.7574 | 0.8205 | 0.7877 | 0.9763 |
| 0.0043 | 20.0 | 1460 | 0.1620 | 0.7605 | 0.8141 | 0.7864 | 0.9765 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|