modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 distinct values) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars)
---|---|---|---|---|---|---|
AbdulmalikAdeyemo/wav2vec2-large-xls-r-300m-hausa | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.39 +/- 28.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
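In the meantime, a minimal sketch of loading such a checkpoint, assuming it was pushed with `huggingface_sb3`; the repo id and filename below are placeholders, not the actual ones for this model:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo id and filename; substitute the actual ones for this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
# model.predict(observation) can now be used to act in a LunarLander-v2 environment.
```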
|
AdapterHub/roberta-base-pf-winogrande | [
"roberta",
"en",
"dataset:winogrande",
"arxiv:2104.08247",
"adapter-transformers",
"adapterhub:comsense/winogrande"
]
| null | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
---
Hello.
This model is a finetune of a specific widely-available anime danbooru-based stable diffusion model.
It was trained on 62 pieces of artwork from the game "Black Souls", created by Sushi Yuusha Toro.
Here is a preview of the style you should expect from this model with minimal-effort prompt editing.

Exif data should be included in the image above; load it into the PNG Info tab in stable-diffusion-webui to get a starting point for prompting this model.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as in the license and share a copy of the CreativeML OpenRAIL-M license with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
AethiQs-Max/AethiQs_GemBERT_bertje_50k | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.44 +/- 0.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
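In the meantime, a hedged sketch of loading and evaluating such a checkpoint, assuming it was pushed with `huggingface_sb3` and that `panda_gym` provides the PandaReachDense-v2 environment; the repo id and filename below are placeholders:
```python
import gym
import panda_gym  # noqa: F401  (importing registers the Panda environments)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical repo id and filename; substitute the actual ones for this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

# If the policy was trained with VecNormalize, its saved statistics must be loaded as well.
env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```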
|
AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_10 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ludsil/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
AethiQs-Max/s3-v1-20_epochs | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-01-19T05:34:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: fine-tuned-five-classes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-five-classes
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2424
- F1: 0.8905
- Roc Auc: 0.9138
- Accuracy: 0.6825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 250 | 0.2669 | 0.8759 | 0.9008 | 0.6525 |
| 0.3273 | 2.0 | 500 | 0.2424 | 0.8905 | 0.9138 | 0.6825 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1+cpu
- Datasets 2.8.0
- Tokenizers 0.12.1
|
Aftabhussain/Tomato_Leaf_Classifier | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index",
"autotrain_compatible"
]
| image-classification | {
"architectures": [
"ViTForImageClassification"
],
"model_type": "vit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 50 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Stalle1.1 Dreambooth model trained by darkvibes with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Ahmedahmed/Wewe | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ahren09/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### stalle-2 Dreambooth model trained by darkvibes with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AimB/konlpy_berttokenizer_helsinki | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- roberta
- adapter-transformers
datasets:
- glue
---
# Adapter `WillHeld/pfadapter-roberta-base-tada-adv-aave-contrast` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("WillHeld/pfadapter-roberta-base-tada-adv-aave-contrast", source="hf", set_active=True)
```
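With the adapter active, the model behaves like a regular `transformers` model; below is a small, non-authoritative sketch of a forward pass (whether the checkpoint ships a prediction head, and of which kind, depends on how the adapter was trained):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This is an example sentence.", return_tensors="pt")
# Runs roberta-base with the activated adapter; inspect `outputs` to see which head (if any) was loaded.
outputs = model(**inputs)
print(outputs)
```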
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
AimB/mT5-en-kr-opus | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 666.50 +/- 197.37
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SatishBethi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SatishBethi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SatishBethi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
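For orientation, a hedged, non-authoritative sketch of roughly how these zoo settings map onto a plain SB3 `DQN` setup (training is normally done through the zoo scripts above, so this is only illustrative):
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# AtariWrapper is applied by make_atari_env; frame_stack=4 corresponds to VecFrameStack.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0), n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    buffer_size=100_000,
    learning_rate=1e-4,
    batch_size=32,
    learning_starts=100_000,
    target_update_interval=1_000,
    train_freq=4,
    gradient_steps=1,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    optimize_memory_usage=False,
)
model.learn(total_timesteps=1_000_000)
```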
|
Akashpb13/Central_kurdish_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ckb",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5886969865896993
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8539
- Matthews Correlation: 0.5887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4699 | 1.0 | 1069 | 0.5752 | 0.4751 |
| 0.2998 | 2.0 | 2138 | 0.6983 | 0.5554 |
| 0.1879 | 3.0 | 3207 | 0.8539 | 0.5887 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Akashpb13/xlsr_hungarian_new | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hu",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | As cool as it might sounds, this model only borrow AOM recipes. No AOM in here at all. |
Akashpb13/xlsr_maltese_wav2vec2 | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"mt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ranajoy98/autotrain-data-contract-new-classifier-19thjan
co2_eq_emissions:
emissions: 5.453836274077357
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2958385563
- CO2 Emissions (in grams): 5.4538
## Validation Metrics
- Loss: 0.159
- Accuracy: 0.965
- Macro F1: 0.964
- Micro F1: 0.965
- Weighted F1: 0.965
- Macro Precision: 0.964
- Micro Precision: 0.965
- Weighted Precision: 0.965
- Macro Recall: 0.964
- Micro Recall: 0.965
- Weighted Recall: 0.965
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ranajoy98/autotrain-contract-new-classifier-19thjan-2958385563
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ranajoy98/autotrain-contract-new-classifier-19thjan-2958385563", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ranajoy98/autotrain-contract-new-classifier-19thjan-2958385563", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
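# Optionally map the raw logits to a predicted label; this assumes the fine-tuned
# config defines an id2label mapping (as AutoTrain classification models usually do).
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class_id, predicted_class_id))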
``` |
Akjder/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 422.20 +/- 77.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AkshatSurolia/ConvNeXt-FaceMask-Finetuned | [
"pytorch",
"safetensors",
"convnext",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| image-classification | {
"architectures": [
"ConvNextForImageClassification"
],
"model_type": "convnext",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 56 | null | ---
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3790
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3752 | 0.3716 |
| 1.4054 | 2.0 | 776 | 1.2843 | 0.4335 |
| 1.0478 | 3.0 | 1164 | 1.3790 | 0.4297 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AkshatSurolia/DeiT-FaceMask-Finetuned | [
"pytorch",
"deit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| image-classification | {
"architectures": [
"DeiTForImageClassification"
],
"model_type": "deit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 46 | 2023-01-19T08:48:29Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1482.74 +/- 360.65
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
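In the meantime, a hedged sketch of loading the checkpoint and rolling out one episode, assuming it was pushed with `huggingface_sb3`, that `pybullet_envs` provides AntBulletEnv-v0, and the classic `gym` reset/step API; the repo id and filename below are placeholders:
```python
import gym
import pybullet_envs  # noqa: F401  (importing registers AntBulletEnv-v0)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo id and filename; substitute the actual ones for this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# If the policy was trained with VecNormalize, its saved statistics must be loaded as well.
env = gym.make("AntBulletEnv-v0")
obs, done, episode_return = env.reset(), False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_return += reward
print(f"episode return: {episode_return:.2f}")
```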
|
AkshaySg/langid | [
"multilingual",
"dataset:VoxLingua107",
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"license:apache-2.0"
]
| audio-classification | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-01-20T04:24:33Z | ---
tags:
- generated_from_trainer
model-index:
- name: lilt-ruroberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-ruroberta
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4919
- Comment: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6}
- Date: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
- Labname: {'precision': 0.5833333333333334, 'recall': 0.6666666666666666, 'f1': 0.6222222222222222, 'number': 21}
- Laboratory: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}
- Measure: {'precision': 0.5833333333333334, 'recall': 0.7777777777777778, 'f1': 0.6666666666666666, 'number': 9}
- Ref Value: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8}
- Result: {'precision': 0.25, 'recall': 0.25, 'f1': 0.25, 'number': 12}
- Overall Precision: 0.4528
- Overall Recall: 0.4
- Overall F1: 0.4248
- Overall Accuracy: 0.8698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Comment | Date | Labname | Laboratory | Measure | Ref Value | Result | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------:|:---------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 2.4398 | 5.0 | 5 | 1.5928 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 21} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.5850 |
| 1.4788 | 10.0 | 10 | 1.1857 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 21} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.6512 |
| 0.9806 | 15.0 | 15 | 0.8188 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.21875, 'recall': 0.3333333333333333, 'f1': 0.2641509433962264, 'number': 21} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.5, 'recall': 0.1111111111111111, 'f1': 0.1818181818181818, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.1667 | 0.1333 | 0.1481 | 0.7660 |
| 0.6358 | 20.0 | 20 | 0.5763 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.41935483870967744, 'recall': 0.6190476190476191, 'f1': 0.5, 'number': 21} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.7, 'recall': 0.7777777777777778, 'f1': 0.7368421052631577, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | {'precision': 0.42857142857142855, 'recall': 0.25, 'f1': 0.3157894736842105, 'number': 12} | 0.4182 | 0.3833 | 0.4 | 0.8675 |
| 0.4712 | 25.0 | 25 | 0.4919 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5833333333333334, 'recall': 0.6666666666666666, 'f1': 0.6222222222222222, 'number': 21} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.5833333333333334, 'recall': 0.7777777777777778, 'f1': 0.6666666666666666, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | {'precision': 0.25, 'recall': 0.25, 'f1': 0.25, 'number': 12} | 0.4528 | 0.4 | 0.4248 | 0.8698 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AlanDev/dall-e-better | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-19T09:15:15Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: AkeyLegalBert_inScotus_and_Ledgar_14epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AkeyLegalBert_inScotus_and_Ledgar_14epoch
This model is a fine-tuned version of [hatemestinbejaia/AkeyLegalBert_inScotus_and_Ledgar](https://huggingface.co/hatemestinbejaia/AkeyLegalBert_inScotus_and_Ledgar) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3388 | 1.0 | 9878 | 3.5630 |
| 3.2344 | 2.0 | 19756 | 3.5443 |
| 3.2459 | 3.0 | 29634 | 3.4798 |
| 3.2394 | 4.0 | 39512 | 3.4407 |
| 3.2801 | 5.0 | 49390 | 3.4104 |
| 3.2772 | 6.0 | 59268 | 3.3571 |
| 3.3636 | 7.0 | 69146 | 3.3350 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Aleenbo/Arcane | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: nachshonc/SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Aleksandar/bert-srb-base-cased-oscar | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: shovall/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Aleksandar/bert-srb-ner-setimes-lr | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: arnonl/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Aleksandar/bert-srb-ner | [
"pytorch",
"bert",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-19T09:38:26Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 224.72 +/- 80.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Aleksandar/distilbert-srb-base-cased-oscar | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-19T09:38:40Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.52 +/- 0.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Aleksandar/distilbert-srb-ner-setimes | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: orenk/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Aleksandar/distilbert-srb-ner | [
"pytorch",
"distilbert",
"token-classification",
"sr",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: orenk/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Aleksandar/electra-srb-ner-setimes | [
"pytorch",
"electra",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"ElectraForTokenClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: farukbuldur/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Aleksandar/electra-srb-ner | [
"pytorch",
"safetensors",
"electra",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"ElectraForTokenClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is assumed to be the pickle-loading helper from the course notebook (not huggingface_sb3's).
model = load_from_hub(repo_id="Vis03al/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
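Continuing from the snippet above, a hedged sketch of rolling out the greedy policy, assuming the dict returned by the course's `load_from_hub` helper exposes a `"qtable"` entry (as in the course template) and the classic `gym` API:
```python
import numpy as np

# `model` and `env` come from the snippet above; model["qtable"] is assumed to hold the Q-table.
qtable = np.array(model["qtable"])
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily with respect to the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode reward: {total_reward}")
```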
|
Alessandro/model_name | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-19T10:44:30Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
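Since the model targets tasks like semantic search, a small follow-up sketch comparing the two embeddings above with cosine similarity:
```python
from sentence_transformers import util

# Cosine similarity between the two sentence embeddings computed above.
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.4f}")
```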
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
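As a quick follow-up, the pooled embeddings can be compared directly, e.g. with cosine similarity (this reuses the `sentence_embeddings` tensor computed above):
```python
import torch.nn.functional as F

# Cosine similarity between the two sentence embeddings computed above
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
print("Cosine similarity:", (normalized[0] @ normalized[1]).item())
```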
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11610 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 11610,
"warmup_steps": 1161,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LongformerModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AlexN/xls-r-300m-pt | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"robust-speech-event",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
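As a concrete illustration, assuming the Huggy configuration file from the course notebook sits at `./config/ppo/Huggy.yaml` and the original run id was `Huggy` (both values are assumptions — adapt them to your setup):
```
mlagents-learn ./config/ppo/Huggy.yaml --run-id="Huggy" --resume
```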
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: stelladk/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AlexaMerens/Owl | [
"license:cc"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-19T11:07:12Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('slavadubrov/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
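To draw several samples in one call, the pipeline also accepts a batch size (a small sketch reusing the `pipeline` object created above; the generated images are standard PIL objects):
```python
# Sample four butterflies at once and save them to disk
images = pipeline(batch_size=4).images
for i, img in enumerate(images):
    img.save(f"butterfly_{i}.png")
```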
|
AlgoveraAI/dcgan | [
"pytorch",
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-rotten_tomatoes-from-scratch-custom-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-rotten_tomatoes-from-scratch-custom-tokenizer
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.51 | 0.47 | 500 | 8.8449 |
| 8.2514 | 0.94 | 1000 | 8.1705 |
| 7.6633 | 1.41 | 1500 | 7.8679 |
| 7.5673 | 1.87 | 2000 | 7.9401 |
| 7.445 | 2.34 | 2500 | 7.8483 |
| 7.3703 | 2.81 | 3000 | 7.8663 |
| 7.3972 | 3.28 | 3500 | 7.9280 |
| 7.3585 | 3.75 | 4000 | 7.9241 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
AliPotter24/a | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-wikitext-from-scratch-custom-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-wikitext-from-scratch-custom-tokenizer
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.4643 | 0.4 | 500 | 8.8756 |
| 8.3543 | 0.8 | 1000 | 8.2034 |
| 7.8651 | 1.2 | 1500 | nan |
| 7.7169 | 1.6 | 2000 | 7.9480 |
| 7.6861 | 2.0 | 2500 | 7.9370 |
| 7.6117 | 2.4 | 3000 | 7.9070 |
| 7.6402 | 2.8 | 3500 | 7.9129 |
| 7.6067 | 3.2 | 4000 | nan |
| 7.5826 | 3.6 | 4500 | 7.8070 |
| 7.5554 | 4.0 | 5000 | 7.8293 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Alireza1044/albert-base-v2-mnli | [
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 235 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 749.67 +/- 58.57
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
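A minimal loading sketch, assuming the checkpoint was pushed with `huggingface_sb3` under a hypothetical repo id and filename (replace both with the actual values for this model):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Both the repo id and the filename below are placeholders, not the real ones for this card
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```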
|
Alireza1044/albert-base-v2-mrpc | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 204 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9330467845924947
- name: Recall
type: recall
value: 0.9498485358465163
- name: F1
type: f1
value: 0.9413726961888084
- name: Accuracy
type: accuracy
value: 0.9860628716077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0636
- Precision: 0.9330
- Recall: 0.9498
- F1: 0.9414
- Accuracy: 0.9861
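For inference, the fine-tuned checkpoint can be used with the token-classification pipeline (a sketch; the repo id below is a placeholder, since the card does not state where the weights are published):
```python
from transformers import pipeline

# "<user>/bert-finetuned-ner" is a placeholder -- point it at this fine-tuned checkpoint
ner = pipeline("token-classification", model="<user>/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```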
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0901 | 1.0 | 1756 | 0.0696 | 0.9166 | 0.9325 | 0.9245 | 0.9815 |
| 0.0366 | 2.0 | 3512 | 0.0632 | 0.9324 | 0.9493 | 0.9408 | 0.9857 |
| 0.0178 | 3.0 | 5268 | 0.0636 | 0.9330 | 0.9498 | 0.9414 | 0.9861 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Alireza1044/albert-base-v2-qnli | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 41 | null | ---
tags:
- generated_from_trainer
model-index:
- name: small-mlm-rotten_tomatoes-from-scratch-custom-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-rotten_tomatoes-from-scratch-custom-tokenizer
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.6694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.1892 | 0.47 | 500 | 7.9116 |
| 7.5271 | 0.94 | 1000 | 7.8502 |
| 7.3359 | 1.41 | 1500 | 7.6451 |
| 7.3365 | 1.87 | 2000 | 7.7659 |
| 7.1853 | 2.34 | 2500 | 7.6368 |
| 7.0682 | 2.81 | 3000 | 7.6640 |
| 7.0894 | 3.28 | 3500 | 7.7055 |
| 7.0172 | 3.75 | 4000 | 7.6694 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Alireza1044/albert-base-v2-sst2 | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 52 | 2023-01-19T11:54:22Z | ---
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-snli-from-scratch-custom-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-snli-from-scratch-custom-tokenizer
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.5662 | 0.4 | 500 | 6.9015 |
| 6.244 | 0.8 | 1000 | 6.2808 |
| 5.7261 | 1.2 | 1500 | 6.0693 |
| 5.5705 | 1.6 | 2000 | 6.1026 |
| 5.4875 | 2.0 | 2500 | 6.0050 |
| 5.3792 | 2.4 | 3000 | 5.9327 |
| 5.318 | 2.8 | 3500 | 5.9083 |
| 5.294 | 3.2 | 4000 | 5.8751 |
| 5.2403 | 3.6 | 4500 | 5.8573 |
| 5.1567 | 4.0 | 5000 | 5.8653 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Alireza1044/dwight_bert_lm | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: lsaulier/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AlirezaBaneshi/testPersianQA | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- generated_from_trainer
model-index:
- name: small-mlm-wikitext-from-scratch-custom-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-wikitext-from-scratch-custom-tokenizer
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4613
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.2616 | 0.4 | 500 | 8.0137 |
| 7.6832 | 0.8 | 1000 | 7.8660 |
| 7.586 | 1.2 | 1500 | nan |
| 7.4953 | 1.6 | 2000 | 7.6839 |
| 7.4399 | 2.0 | 2500 | 7.6496 |
| 7.3358 | 2.4 | 3000 | 7.5908 |
| 7.3526 | 2.8 | 3500 | 7.5918 |
| 7.2773 | 3.2 | 4000 | nan |
| 7.2433 | 3.6 | 4500 | 7.4326 |
| 7.2167 | 4.0 | 5000 | 7.4613 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Aliskin/xlm-roberta-base-finetuned-marc | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
datasets:
- xnli
language:
- de
metrics:
- accuracy
pipeline_tag: zero-shot-classification
---
# XLM-ROBERTA-BASE-XNLI
## Model description
This model takes the XLM-Roberta-base model, which has been further pre-trained on a large corpus of multilingual Twitter data.
It was developed following a strategy similar to the one introduced as part of the [Tweet Eval](https://github.com/cardiffnlp/tweeteval) framework.
The model is further fine-tuned on all of the languages of the XNLI train set.
## Intended Usage
This model was developed to do Zero-Shot Text Classification in the realm of Hate Speech Detection. It is fine-tuned on the whole XNLI train set, which covers 15 different languages:
**ar, bg, de, en, el, es, fr, hi, ru, sw, th, tr, ur, vi, zh**
Since the base model was pre-trained on 100 different languages, it has shown some effectiveness in languages beyond these 15. Please refer to the list of languages in the [XLM Roberta paper](https://arxiv.org/abs/1911.02116)
### Usage with Zero-Shot Classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="morit/XLM-T-full-xnli")
```
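The classifier can then be applied to text in any of the supported languages by supplying candidate labels (an illustrative example; the labels and hypothesis template are free to choose):
```python
sequence = "Der neue Gesetzesentwurf wurde heute im Parlament diskutiert."
candidate_labels = ["hate speech", "not hate speech"]

result = classifier(sequence, candidate_labels, hypothesis_template="This text is {}.")
print(result["labels"][0], result["scores"][0])
```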
## Training
This model was pre-trained on a set of 100 languages and then further trained on 198M multilingual tweets, as described in the original [paper](https://arxiv.org/abs/2104.12250). It was subsequently fine-tuned on the full train set of the XNLI dataset, a machine-translated version of the MNLI dataset, for 5 epochs, evaluating on the XNLI eval set at the end of every epoch; the checkpoint with the highest eval accuracy was chosen.

- learning rate: 2e-5
- batch size: 32
- max sequence length: 128
- hardware: a single GPU (NVIDIA GeForce RTX 3090)
## Evaluation
The model was evaluated on all the test sets of the XNLI dataset, resulting in the following accuracies:
| ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh |
|-----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|
| 0.749 | 0.787 | 0.774 | 0.774 | 0.831 | 0.796 | 0.785 | 0.734 | 0.761 | 0.701 | 0.757 | 0.758 | 0.704 | 0.778 | 0.774 |
|
Amrrs/wav2vec2-large-xlsr-53-tamil | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index",
"has_space"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2023-01-19T13:22:01Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: misza222/ppo-PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Andranik/TestQaV1 | [
"pytorch",
"rust",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/Dell-theme-finetuned-overfinetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/Dell-theme-finetuned-overfinetuned
This model is a fine-tuned version of [nandysoham/distilbert-base-uncased-finetuned-squad](https://huggingface.co/nandysoham/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4305
- Train End Logits Accuracy: 0.7857
- Train Start Logits Accuracy: 0.8006
- Validation Loss: 2.3316
- Validation End Logits Accuracy: 0.1647
- Validation Start Logits Accuracy: 0.2118
- Epoch: 9
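Since the base checkpoint is an extractive question-answering model, the fine-tuned weights can be queried with the question-answering pipeline (a sketch, assuming this repo's weights are loaded directly; the question and context are only illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="nandysoham/Dell-theme-finetuned-overfinetuned")
print(qa(question="Where is Dell headquartered?",
         context="Dell is an American technology company headquartered in Round Rock, Texas."))
```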
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 210, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5691 | 0.5179 | 0.5119 | 1.2093 | 0.4588 | 0.4588 | 0 |
| 0.9333 | 0.6101 | 0.5833 | 1.2828 | 0.3176 | 0.3647 | 1 |
| 0.7924 | 0.6042 | 0.5982 | 1.4627 | 0.2824 | 0.2824 | 2 |
| 0.6858 | 0.6905 | 0.6786 | 1.5630 | 0.3059 | 0.2941 | 3 |
| 0.6562 | 0.6518 | 0.6815 | 1.7647 | 0.2235 | 0.2118 | 4 |
| 0.5996 | 0.7054 | 0.6994 | 2.0109 | 0.2118 | 0.2471 | 5 |
| 0.5277 | 0.7440 | 0.7589 | 2.1286 | 0.1765 | 0.2000 | 6 |
| 0.4810 | 0.7679 | 0.7798 | 2.2263 | 0.1529 | 0.2000 | 7 |
| 0.4488 | 0.8036 | 0.7887 | 2.2999 | 0.1529 | 0.1882 | 8 |
| 0.4305 | 0.7857 | 0.8006 | 2.3316 | 0.1647 | 0.2118 | 9 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Andrey1989/mbart-finetuned-en-to-kk | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
- finance
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finbert-tone-finetuned-finance-text-classification
results: []
datasets:
- nickmuchi/financial-text-combo-classification
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finbert-tone-finetuned-finance-text-classification
This model is a fine-tuned version of [yiyanghkust/finbert-tone](https://huggingface.co/yiyanghkust/finbert-tone) on the [nickmuchi/financial-text-combo-classification](https://huggingface.co/datasets/nickmuchi/financial-text-combo-classification) dataset, which combines financial_phrasebank, FinanceInc/auditor_sentiment and zeroshot/twitter-financial-news-sentiment.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Accuracy: 0.9097
- F1: 0.9102
- Precision: 0.9110
- Recall: 0.9097
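For sentiment inference on financial text, the checkpoint can be used through the text-classification pipeline (a sketch, assuming the weights are published as `nickmuchi/finbert-tone-finetuned-finance-text-classification`, matching the card name and the dataset owner):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="nickmuchi/finbert-tone-finetuned-finance-text-classification")
print(classifier("Quarterly revenue grew 12% year over year, beating analyst expectations."))
```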
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 141 | 0.3934 | 0.8431 | 0.8427 | 0.8456 | 0.8431 |
| No log | 2.0 | 282 | 0.3214 | 0.8843 | 0.8843 | 0.8867 | 0.8843 |
| No log | 3.0 | 423 | 0.3302 | 0.8882 | 0.8902 | 0.8965 | 0.8882 |
| 0.4444 | 4.0 | 564 | 0.3611 | 0.8980 | 0.8993 | 0.9026 | 0.8980 |
| 0.4444 | 5.0 | 705 | 0.4006 | 0.8975 | 0.8987 | 0.9014 | 0.8975 |
| 0.4444 | 6.0 | 846 | 0.4517 | 0.9037 | 0.9043 | 0.9057 | 0.9037 |
| 0.4444 | 7.0 | 987 | 0.5324 | 0.9027 | 0.9035 | 0.9057 | 0.9027 |
| 0.0406 | 8.0 | 1128 | 0.5308 | 0.9063 | 0.9074 | 0.9098 | 0.9063 |
| 0.0406 | 9.0 | 1269 | 0.5586 | 0.9081 | 0.9084 | 0.9089 | 0.9081 |
| 0.0406 | 10.0 | 1410 | 0.5783 | 0.9076 | 0.9080 | 0.9086 | 0.9076 |
| 0.0121 | 11.0 | 1551 | 0.5741 | 0.9115 | 0.9116 | 0.9121 | 0.9115 |
| 0.0121 | 12.0 | 1692 | 0.6288 | 0.9104 | 0.9108 | 0.9115 | 0.9104 |
| 0.0121 | 13.0 | 1833 | 0.6328 | 0.9050 | 0.9059 | 0.9078 | 0.9050 |
| 0.0121 | 14.0 | 1974 | 0.6887 | 0.9042 | 0.9054 | 0.9088 | 0.9042 |
| 0.0063 | 15.0 | 2115 | 0.6345 | 0.9086 | 0.9094 | 0.9109 | 0.9086 |
| 0.0063 | 16.0 | 2256 | 0.6545 | 0.9102 | 0.9103 | 0.9108 | 0.9102 |
| 0.0063 | 17.0 | 2397 | 0.6585 | 0.9086 | 0.9092 | 0.9103 | 0.9086 |
| 0.0033 | 18.0 | 2538 | 0.6676 | 0.9081 | 0.9087 | 0.9098 | 0.9081 |
| 0.0033 | 19.0 | 2679 | 0.6614 | 0.9110 | 0.9113 | 0.9119 | 0.9110 |
| 0.0033 | 20.0 | 2820 | 0.6645 | 0.9097 | 0.9102 | 0.9110 | 0.9097 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2 |
Andrija/SRoBERTa | [
"pytorch",
"roberta",
"fill-mask",
"hr",
"sr",
"multilingual",
"dataset:leipzig",
"transformers",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 88 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ann2020/model-finetuned-ner | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: blghtr/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-19T15:53:50Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sst2
metrics:
- accuracy
model-index:
- name: '42'
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9254587155963303
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 42
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3109
- Accuracy: 0.9255
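For inference, the checkpoint can also be loaded manually with the Auto classes (a sketch; the path below is a placeholder, since the card does not say where this run was published):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "path/to/this-checkpoint" is a placeholder for the published location of this run
tokenizer = AutoTokenizer.from_pretrained("path/to/this-checkpoint")
model = AutoModelForSequenceClassification.from_pretrained("path/to/this-checkpoint")

inputs = tokenizer("A gripping film with a career-best lead performance.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```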
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: not_parallel
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 2105 | 0.2167 | 0.9232 |
| 0.2049 | 2.0 | 4210 | 0.2375 | 0.9278 |
| 0.123 | 3.0 | 6315 | 0.2636 | 0.9243 |
| 0.0839 | 4.0 | 8420 | 0.2865 | 0.9243 |
| 0.058 | 5.0 | 10525 | 0.3109 | 0.9255 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.7.1
- Tokenizers 0.11.6
|
AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-01-19T15:57:16Z | This model is based on a custom Transformer model that can be installed with:
```bash
pip install git+https://github.com/lucadiliello/bleurt-pytorch.git
```
Now load the model and make predictions with:
```python
import torch
from bleurt_pytorch import BleurtConfig, BleurtForSequenceClassification, BleurtTokenizer
config = BleurtConfig.from_pretrained('lucadiliello/bleurt-tiny-128')
model = BleurtForSequenceClassification.from_pretrained('lucadiliello/bleurt-tiny-128')
tokenizer = BleurtTokenizer.from_pretrained('lucadiliello/bleurt-tiny-128')
references = ["a bird chirps by the window", "this is a random sentence"]
candidates = ["a bird chirps by the window", "this looks like a random sentence"]
model.eval()
with torch.no_grad():
inputs = tokenizer(references, candidates, padding='longest', return_tensors='pt')
res = model(**inputs).logits.flatten().tolist()
print(res)
# [0.7669461369514465, 0.6060263514518738]
```
Take a look at this [repository](https://github.com/lucadiliello/bleurt-pytorch) for the definition of `BleurtConfig`, `BleurtForSequenceClassification` and `BleurtTokenizer` in PyTorch. |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="staycoolish/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.10 +/- 15.29
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="staycoolish/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
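To reproduce a mean-reward style evaluation, the greedy policy can be rolled out over several episodes (a sketch assuming the pickle stores the table under the `"qtable"` key, as in the Deep RL Course notebooks, and the classic Gym step API):
```python
import numpy as np

# Evaluate the greedy policy over several episodes (the "qtable" key is an assumption)
n_eval_episodes = 100
episode_rewards = []
for _ in range(n_eval_episodes):
    state = env.reset()
    done, total_reward = False, 0
    while not done:
        action = np.argmax(model["qtable"][state])
        state, reward, done, info = env.step(action)
        total_reward += reward
    episode_rewards.append(total_reward)
print(f"mean_reward={np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```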
|
AnonymousSub/AR_rule_based_twostage_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- bionlp2004
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: bionlp2004
type: bionlp2004
config: bionlp2004
split: train
args: bionlp2004
metrics:
- name: Precision
type: precision
value: 0.7522050257946413
- name: Recall
type: recall
value: 0.8139744282369891
- name: F1
type: f1
value: 0.781871648503719
- name: Accuracy
type: accuracy
value: 0.9379251370155868
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the bionlp2004 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2098
- Precision: 0.7522
- Recall: 0.8140
- F1: 0.7819
- Accuracy: 0.9379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2255 | 1.0 | 2078 | 0.2073 | 0.7080 | 0.7877 | 0.7457 | 0.9305 |
| 0.1709 | 2.0 | 4156 | 0.1995 | 0.7479 | 0.8106 | 0.7780 | 0.9364 |
| 0.1324 | 3.0 | 6234 | 0.2098 | 0.7522 | 0.8140 | 0.7819 | 0.9379 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousSub/AR_rule_based_twostagetriplet_hier_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
AnonymousSub/SR_cline | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: blghtr/ppo-pyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AnonymousSub/SR_consert | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- safetensors
inference: true
---
## Description

Maxwell the Cat Diffusion is a latent text-to-image diffusion model, built on Stable Diffusion v1.5 (`runwayml/stable-diffusion-v1-5`, as used in the training script below) and fine-tuned on 5 images of the 'Maxwell the Cat' meme, which originated from a modded Half-Life 2 video.
To use this gorgeous object in your generations, add `maxwell the cat` to the prompts.
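A minimal inference sketch with 🧨 diffusers (the repo id below is a placeholder for wherever these fine-tuned weights are hosted):
```python
import torch
from diffusers import StableDiffusionPipeline

# "<this-repo-id>" is a placeholder -- replace it with the repo hosting these weights
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16).to("cuda")
image = pipe("maxwell the cat riding a skateboard, photorealistic").images[0]
image.save("maxwell.png")
```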
## Dreambooth hyperparameters
```sh
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export INSTANCE_DIR="/home/kabachuha/kml/datasets/objects/maxwell"
export CLASS_DIR="/home/kabachuha/kml/datasets/objects/maxwell_class"
export OUTPUT_DIR="/home/kabachuha/kml/maxwell/"
accelerate launch train_dreambooth.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--class_data_dir=$CLASS_DIR \
--output_dir=$OUTPUT_DIR \
--with_prior_preservation --prior_loss_weight=1.0 \
--instance_prompt="maxwell the cat" \
--class_prompt="3d model of a black cat, lowpoly" \
--resolution=512 \
--train_batch_size=1 \
--gradient_accumulation_steps=1 \
--learning_rate=1e-6 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--num_class_images=200 \
--max_train_steps=800 \
--mixed_precision 'no' \
--train_text_encoder \
--checkpointing_steps 1200
```
The dataset link https://drive.google.com/drive/folders/1nd8NHrwuu_VKHaU8iBFw95w1iqfmhNTo?usp=share_link
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license https://huggingface.co/stabilityai/stable-diffusion-2
## Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
|
AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/my_food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/my_food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1099
- Validation Loss: 0.2439
- Train Accuracy: 0.947
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
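While the card does not list intended uses, a minimal inference sketch may help as a starting point. It assumes the checkpoint is hosted under the repo id from the card title and keeps its TensorFlow/Keras weights (this is the ViT fine-tune described above).
```python
# Hedged inference sketch for this image classifier (TensorFlow weights assumed).
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

repo_id = "Rocketknight1/my_food_classifier"  # assumed from the card title
processor = AutoImageProcessor.from_pretrained(repo_id)
model = TFViTForImageClassification.from_pretrained(repo_id)

image = Image.open("dish.jpg")  # any RGB food photo
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```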
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.5330 | 1.3738 | 0.923 | 0 |
| 0.8871 | 0.6131 | 0.95 | 1 |
| 0.3703 | 0.4042 | 0.937 | 2 |
| 0.1942 | 0.2981 | 0.94 | 3 |
| 0.1099 | 0.2439 | 0.947 | 4 |
### Framework versions
- Transformers 4.26.0.dev0
- TensorFlow 2.11.0
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2388.90 +/- 100.02
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
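Until the snippet above is filled in, a rough loading sketch could look like the one below. The repo id and filename are placeholders, and evaluating an AntBulletEnv agent may additionally require the VecNormalize statistics saved with the training run.
```python
# Hedged sketch: load the trained A2C agent from the Hub and evaluate it.
# repo_id and filename below are placeholders, not confirmed by this card.
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="<user>/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```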
|
AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
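Since the model targets tasks like semantic search, the resulting embeddings can, for example, be compared with cosine similarity. A small illustrative sketch, reusing the `{MODEL_NAME}` placeholder and made-up example sentences:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

corpus = ["A man is eating food.", "A monkey is playing drums.", "A cheetah chases its prey."]
query = "Someone is having a meal"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```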
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 114 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 114,
"warmup_steps": 12,
"weight_decay": 0.01
}
```
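Putting the pieces above together, a sketch of the corresponding training loop might look as follows. The base checkpoint and the training pairs are illustrative placeholders, not the original data.
```python
# Hedged sketch reconstructing the training setup described above.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("<base-model>")  # hypothetical starting checkpoint

train_examples = [
    InputExample(texts=["A sentence", "A similar sentence"], label=0.9),
    InputExample(texts=["A sentence", "An unrelated sentence"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=12,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```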
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
library_name: xpmir
---
SPLADE models from https://github.com/naver/splade adapted for
[experimaestro IR](https://experimaestro-ir.readthedocs.io/en/stable/).
To use them, you need the `experimaestro-ir` library; see
[the documentation](https://experimaestro-ir.readthedocs.io/en/stable/pretrained.html).
Variants:
- `cocondenser-selfdistil`
- `cocondenser-ensembledistil`
- `efficient-V-large-doc`
- `efficient-VI-BT-large-doc` |
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-19T18:15:41Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: jrnold/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-19T18:16:57Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.84 +/- 1.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
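Pending the author's snippet, one possible loading sketch is shown below; the repo id and filename are placeholders, and `panda-gym` is assumed to provide the environment.
```python
# Hedged sketch: load the A2C agent and roll out one episode in PandaReachDense-v2.
import gym
import panda_gym  # noqa: F401  (registers the Panda environments)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="<user>/a2c-PandaReachDense-v2",  # placeholder
    filename="a2c-PandaReachDense-v2.zip",    # placeholder
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward:.2f}")
```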
|
AnonymousSub/SR_rule_based_twostage_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: other
tags:
- generated_from_keras_callback
model-index:
- name: MariaK/scene_segmentation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaK/scene_segmentation
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7868
- Validation Loss: 3.0539
- Validation Mean Iou: 0.0483
- Validation Mean Accuracy: 0.0908
- Validation Overall Accuracy: 0.3160
- Validation Accuracy Wall: 0.1166
- Validation Accuracy Building: 0.4198
- Validation Accuracy Sky: 0.1940
- Validation Accuracy Floor: 0.9531
- Validation Accuracy Tree: 0.3920
- Validation Accuracy Ceiling: 0.4672
- Validation Accuracy Road: 0.1731
- Validation Accuracy Bed : 0.3648
- Validation Accuracy Windowpane: 0.1441
- Validation Accuracy Grass: 0.0217
- Validation Accuracy Cabinet: 0.0857
- Validation Accuracy Sidewalk: nan
- Validation Accuracy Person: nan
- Validation Accuracy Earth: 0.0
- Validation Accuracy Door: 0.0
- Validation Accuracy Table: 0.0
- Validation Accuracy Mountain: 0.0
- Validation Accuracy Plant: 0.2078
- Validation Accuracy Curtain: 0.0
- Validation Accuracy Chair: 0.0
- Validation Accuracy Car: nan
- Validation Accuracy Water: 0.0
- Validation Accuracy Painting: 0.0
- Validation Accuracy Sofa: nan
- Validation Accuracy Shelf: nan
- Validation Accuracy House: nan
- Validation Accuracy Sea: nan
- Validation Accuracy Mirror: nan
- Validation Accuracy Rug: 0.0
- Validation Accuracy Field: 0.0
- Validation Accuracy Armchair: nan
- Validation Accuracy Seat: nan
- Validation Accuracy Fence: 0.0
- Validation Accuracy Desk: 0.0
- Validation Accuracy Rock: 0.0
- Validation Accuracy Wardrobe: 0.0
- Validation Accuracy Lamp: nan
- Validation Accuracy Bathtub: nan
- Validation Accuracy Railing: 0.0
- Validation Accuracy Cushion: nan
- Validation Accuracy Base: nan
- Validation Accuracy Box: nan
- Validation Accuracy Column: nan
- Validation Accuracy Signboard: nan
- Validation Accuracy Chest of drawers: 0.0
- Validation Accuracy Counter: 0.0
- Validation Accuracy Sand: nan
- Validation Accuracy Sink: nan
- Validation Accuracy Skyscraper: 0.0
- Validation Accuracy Fireplace: nan
- Validation Accuracy Refrigerator: nan
- Validation Accuracy Grandstand: nan
- Validation Accuracy Path: nan
- Validation Accuracy Stairs: 0.0
- Validation Accuracy Runway: 0.0
- Validation Accuracy Case: nan
- Validation Accuracy Pool table: nan
- Validation Accuracy Pillow: nan
- Validation Accuracy Screen door: nan
- Validation Accuracy Stairway: nan
- Validation Accuracy River: nan
- Validation Accuracy Bridge: nan
- Validation Accuracy Bookcase: nan
- Validation Accuracy Blind: nan
- Validation Accuracy Coffee table: nan
- Validation Accuracy Toilet: nan
- Validation Accuracy Flower: nan
- Validation Accuracy Book: 0.0
- Validation Accuracy Hill: nan
- Validation Accuracy Bench: 0.0
- Validation Accuracy Countertop: nan
- Validation Accuracy Stove: nan
- Validation Accuracy Palm: nan
- Validation Accuracy Kitchen island: nan
- Validation Accuracy Computer: nan
- Validation Accuracy Swivel chair: 0.0
- Validation Accuracy Boat: 0.0
- Validation Accuracy Bar: nan
- Validation Accuracy Arcade machine: nan
- Validation Accuracy Hovel: nan
- Validation Accuracy Bus: nan
- Validation Accuracy Towel: nan
- Validation Accuracy Light: 0.0
- Validation Accuracy Truck: 0.0
- Validation Accuracy Tower: 0.0
- Validation Accuracy Chandelier: nan
- Validation Accuracy Awning: nan
- Validation Accuracy Streetlight: nan
- Validation Accuracy Booth: nan
- Validation Accuracy Television receiver: nan
- Validation Accuracy Airplane: nan
- Validation Accuracy Dirt track: nan
- Validation Accuracy Apparel: nan
- Validation Accuracy Pole: nan
- Validation Accuracy Land: nan
- Validation Accuracy Bannister: nan
- Validation Accuracy Escalator: nan
- Validation Accuracy Ottoman: nan
- Validation Accuracy Bottle: nan
- Validation Accuracy Buffet: nan
- Validation Accuracy Poster: nan
- Validation Accuracy Stage: nan
- Validation Accuracy Van: nan
- Validation Accuracy Ship: nan
- Validation Accuracy Fountain: nan
- Validation Accuracy Conveyer belt: nan
- Validation Accuracy Canopy: nan
- Validation Accuracy Washer: nan
- Validation Accuracy Plaything: nan
- Validation Accuracy Swimming pool: nan
- Validation Accuracy Stool: nan
- Validation Accuracy Barrel: nan
- Validation Accuracy Basket: nan
- Validation Accuracy Waterfall: nan
- Validation Accuracy Tent: nan
- Validation Accuracy Bag: nan
- Validation Accuracy Minibike: nan
- Validation Accuracy Cradle: nan
- Validation Accuracy Oven: nan
- Validation Accuracy Ball: nan
- Validation Accuracy Food: nan
- Validation Accuracy Step: nan
- Validation Accuracy Tank: nan
- Validation Accuracy Trade name: nan
- Validation Accuracy Microwave: nan
- Validation Accuracy Pot: nan
- Validation Accuracy Animal: nan
- Validation Accuracy Bicycle: nan
- Validation Accuracy Lake: nan
- Validation Accuracy Dishwasher: nan
- Validation Accuracy Screen: nan
- Validation Accuracy Blanket: nan
- Validation Accuracy Sculpture: nan
- Validation Accuracy Hood: nan
- Validation Accuracy Sconce: nan
- Validation Accuracy Vase: nan
- Validation Accuracy Traffic light: nan
- Validation Accuracy Tray: nan
- Validation Accuracy Ashcan: nan
- Validation Accuracy Fan: nan
- Validation Accuracy Pier: nan
- Validation Accuracy Crt screen: nan
- Validation Accuracy Plate: nan
- Validation Accuracy Monitor: nan
- Validation Accuracy Bulletin board: nan
- Validation Accuracy Shower: nan
- Validation Accuracy Radiator: nan
- Validation Accuracy Glass: nan
- Validation Accuracy Clock: nan
- Validation Accuracy Flag: nan
- Validation Iou Wall: 0.0423
- Validation Iou Building: 0.2014
- Validation Iou Sky: 0.0801
- Validation Iou Floor: 0.6618
- Validation Iou Tree: 0.0735
- Validation Iou Ceiling: 0.2882
- Validation Iou Road: 0.1721
- Validation Iou Bed : 0.1926
- Validation Iou Windowpane: 0.0337
- Validation Iou Grass: 0.0128
- Validation Iou Cabinet: 0.0439
- Validation Iou Sidewalk: 0.0
- Validation Iou Person: 0.0
- Validation Iou Earth: 0.0
- Validation Iou Door: 0.0
- Validation Iou Table: 0.0
- Validation Iou Mountain: 0.0
- Validation Iou Plant: 0.1782
- Validation Iou Curtain: 0.0
- Validation Iou Chair: 0.0
- Validation Iou Car: nan
- Validation Iou Water: 0.0
- Validation Iou Painting: 0.0
- Validation Iou Sofa: nan
- Validation Iou Shelf: nan
- Validation Iou House: nan
- Validation Iou Sea: nan
- Validation Iou Mirror: nan
- Validation Iou Rug: 0.0
- Validation Iou Field: 0.0
- Validation Iou Armchair: nan
- Validation Iou Seat: nan
- Validation Iou Fence: 0.0
- Validation Iou Desk: 0.0
- Validation Iou Rock: 0.0
- Validation Iou Wardrobe: 0.0
- Validation Iou Lamp: nan
- Validation Iou Bathtub: nan
- Validation Iou Railing: 0.0
- Validation Iou Cushion: nan
- Validation Iou Base: nan
- Validation Iou Box: nan
- Validation Iou Column: nan
- Validation Iou Signboard: nan
- Validation Iou Chest of drawers: 0.0
- Validation Iou Counter: 0.0
- Validation Iou Sand: nan
- Validation Iou Sink: nan
- Validation Iou Skyscraper: 0.0
- Validation Iou Fireplace: nan
- Validation Iou Refrigerator: nan
- Validation Iou Grandstand: nan
- Validation Iou Path: nan
- Validation Iou Stairs: 0.0
- Validation Iou Runway: 0.0
- Validation Iou Case: nan
- Validation Iou Pool table: nan
- Validation Iou Pillow: nan
- Validation Iou Screen door: nan
- Validation Iou Stairway: nan
- Validation Iou River: nan
- Validation Iou Bridge: nan
- Validation Iou Bookcase: nan
- Validation Iou Blind: nan
- Validation Iou Coffee table: nan
- Validation Iou Toilet: nan
- Validation Iou Flower: nan
- Validation Iou Book: 0.0
- Validation Iou Hill: nan
- Validation Iou Bench: 0.0
- Validation Iou Countertop: nan
- Validation Iou Stove: nan
- Validation Iou Palm: nan
- Validation Iou Kitchen island: nan
- Validation Iou Computer: nan
- Validation Iou Swivel chair: 0.0
- Validation Iou Boat: 0.0
- Validation Iou Bar: nan
- Validation Iou Arcade machine: nan
- Validation Iou Hovel: nan
- Validation Iou Bus: nan
- Validation Iou Towel: nan
- Validation Iou Light: 0.0
- Validation Iou Truck: 0.0
- Validation Iou Tower: 0.0
- Validation Iou Chandelier: nan
- Validation Iou Awning: nan
- Validation Iou Streetlight: nan
- Validation Iou Booth: nan
- Validation Iou Television receiver: nan
- Validation Iou Airplane: nan
- Validation Iou Dirt track: nan
- Validation Iou Apparel: nan
- Validation Iou Pole: nan
- Validation Iou Land: nan
- Validation Iou Bannister: nan
- Validation Iou Escalator: nan
- Validation Iou Ottoman: nan
- Validation Iou Bottle: nan
- Validation Iou Buffet: nan
- Validation Iou Poster: nan
- Validation Iou Stage: nan
- Validation Iou Van: nan
- Validation Iou Ship: nan
- Validation Iou Fountain: nan
- Validation Iou Conveyer belt: nan
- Validation Iou Canopy: nan
- Validation Iou Washer: nan
- Validation Iou Plaything: nan
- Validation Iou Swimming pool: nan
- Validation Iou Stool: nan
- Validation Iou Barrel: nan
- Validation Iou Basket: nan
- Validation Iou Waterfall: nan
- Validation Iou Tent: nan
- Validation Iou Bag: nan
- Validation Iou Minibike: nan
- Validation Iou Cradle: nan
- Validation Iou Oven: nan
- Validation Iou Ball: nan
- Validation Iou Food: nan
- Validation Iou Step: nan
- Validation Iou Tank: nan
- Validation Iou Trade name: nan
- Validation Iou Microwave: nan
- Validation Iou Pot: nan
- Validation Iou Animal: nan
- Validation Iou Bicycle: nan
- Validation Iou Lake: nan
- Validation Iou Dishwasher: nan
- Validation Iou Screen: nan
- Validation Iou Blanket: nan
- Validation Iou Sculpture: nan
- Validation Iou Hood: nan
- Validation Iou Sconce: nan
- Validation Iou Vase: nan
- Validation Iou Traffic light: nan
- Validation Iou Tray: nan
- Validation Iou Ashcan: nan
- Validation Iou Fan: nan
- Validation Iou Pier: nan
- Validation Iou Crt screen: nan
- Validation Iou Plate: nan
- Validation Iou Monitor: nan
- Validation Iou Bulletin board: nan
- Validation Iou Shower: nan
- Validation Iou Radiator: nan
- Validation Iou Glass: nan
- Validation Iou Clock: nan
- Validation Iou Flag: nan
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
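As a rough illustration of intended use, here is a minimal semantic-segmentation inference sketch for this SegFormer (mit-b0) fine-tune. It assumes the checkpoint lives under the repo id from the card title and keeps its TensorFlow weights.
```python
# Hedged inference sketch for this SegFormer checkpoint (TensorFlow weights assumed).
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFSegformerForSemanticSegmentation

repo_id = "MariaK/scene_segmentation"  # assumed from the card title
processor = AutoImageProcessor.from_pretrained(repo_id)
model = TFSegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("scene.jpg")
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits                    # (batch, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
logits = tf.transpose(logits, [0, 2, 3, 1])        # channels-last for tf.image.resize
upsampled = tf.image.resize(logits, size=image.size[::-1], method="bilinear")
segmentation = tf.argmax(upsampled, axis=-1)[0].numpy()
```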
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 6e-05, 'decay_steps': 2000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Accuracy Wall | Validation Accuracy Building | Validation Accuracy Sky | Validation Accuracy Floor | Validation Accuracy Tree | Validation Accuracy Ceiling | Validation Accuracy Road | Validation Accuracy Bed | Validation Accuracy Windowpane | Validation Accuracy Grass | Validation Accuracy Cabinet | Validation Accuracy Sidewalk | Validation Accuracy Person | Validation Accuracy Earth | Validation Accuracy Door | Validation Accuracy Table | Validation Accuracy Mountain | Validation Accuracy Plant | Validation Accuracy Curtain | Validation Accuracy Chair | Validation Accuracy Car | Validation Accuracy Water | Validation Accuracy Painting | Validation Accuracy Sofa | Validation Accuracy Shelf | Validation Accuracy House | Validation Accuracy Sea | Validation Accuracy Mirror | Validation Accuracy Rug | Validation Accuracy Field | Validation Accuracy Armchair | Validation Accuracy Seat | Validation Accuracy Fence | Validation Accuracy Desk | Validation Accuracy Rock | Validation Accuracy Wardrobe | Validation Accuracy Lamp | Validation Accuracy Bathtub | Validation Accuracy Railing | Validation Accuracy Cushion | Validation Accuracy Base | Validation Accuracy Box | Validation Accuracy Column | Validation Accuracy Signboard | Validation Accuracy Chest of drawers | Validation Accuracy Counter | Validation Accuracy Sand | Validation Accuracy Sink | Validation Accuracy Skyscraper | Validation Accuracy Fireplace | Validation Accuracy Refrigerator | Validation Accuracy Grandstand | Validation Accuracy Path | Validation Accuracy Stairs | Validation Accuracy Runway | Validation Accuracy Case | Validation Accuracy Pool table | Validation Accuracy Pillow | Validation Accuracy Screen door | Validation Accuracy Stairway | Validation Accuracy River | Validation Accuracy Bridge | Validation Accuracy Bookcase | Validation Accuracy Blind | Validation Accuracy Coffee table | Validation Accuracy Toilet | Validation Accuracy Flower | Validation Accuracy Book | Validation Accuracy Hill | Validation Accuracy Bench | Validation Accuracy Countertop | Validation Accuracy Stove | Validation Accuracy Palm | Validation Accuracy Kitchen island | Validation Accuracy Computer | Validation Accuracy Swivel chair | Validation Accuracy Boat | Validation Accuracy Bar | Validation Accuracy Arcade machine | Validation Accuracy Hovel | Validation Accuracy Bus | Validation Accuracy Towel | Validation Accuracy Light | Validation Accuracy Truck | Validation Accuracy Tower | Validation Accuracy Chandelier | Validation Accuracy Awning | Validation Accuracy Streetlight | Validation Accuracy Booth | Validation Accuracy Television receiver | Validation Accuracy Airplane | Validation Accuracy Dirt track | Validation Accuracy Apparel | Validation Accuracy Pole | Validation Accuracy Land | Validation Accuracy Bannister | Validation Accuracy Escalator | Validation Accuracy Ottoman | Validation Accuracy Bottle | Validation Accuracy Buffet | Validation Accuracy Poster | Validation Accuracy Stage | Validation Accuracy Van | Validation Accuracy Ship | Validation Accuracy Fountain | Validation Accuracy Conveyer belt | Validation Accuracy Canopy | Validation Accuracy Washer | Validation Accuracy Plaything | Validation Accuracy Swimming pool | Validation Accuracy Stool | Validation Accuracy Barrel | Validation Accuracy Basket | Validation Accuracy Waterfall | Validation Accuracy Tent | Validation Accuracy Bag | Validation 
Accuracy Minibike | Validation Accuracy Cradle | Validation Accuracy Oven | Validation Accuracy Ball | Validation Accuracy Food | Validation Accuracy Step | Validation Accuracy Tank | Validation Accuracy Trade name | Validation Accuracy Microwave | Validation Accuracy Pot | Validation Accuracy Animal | Validation Accuracy Bicycle | Validation Accuracy Lake | Validation Accuracy Dishwasher | Validation Accuracy Screen | Validation Accuracy Blanket | Validation Accuracy Sculpture | Validation Accuracy Hood | Validation Accuracy Sconce | Validation Accuracy Vase | Validation Accuracy Traffic light | Validation Accuracy Tray | Validation Accuracy Ashcan | Validation Accuracy Fan | Validation Accuracy Pier | Validation Accuracy Crt screen | Validation Accuracy Plate | Validation Accuracy Monitor | Validation Accuracy Bulletin board | Validation Accuracy Shower | Validation Accuracy Radiator | Validation Accuracy Glass | Validation Accuracy Clock | Validation Accuracy Flag | Validation Iou Wall | Validation Iou Building | Validation Iou Sky | Validation Iou Floor | Validation Iou Tree | Validation Iou Ceiling | Validation Iou Road | Validation Iou Bed | Validation Iou Windowpane | Validation Iou Grass | Validation Iou Cabinet | Validation Iou Sidewalk | Validation Iou Person | Validation Iou Earth | Validation Iou Door | Validation Iou Table | Validation Iou Mountain | Validation Iou Plant | Validation Iou Curtain | Validation Iou Chair | Validation Iou Car | Validation Iou Water | Validation Iou Painting | Validation Iou Sofa | Validation Iou Shelf | Validation Iou House | Validation Iou Sea | Validation Iou Mirror | Validation Iou Rug | Validation Iou Field | Validation Iou Armchair | Validation Iou Seat | Validation Iou Fence | Validation Iou Desk | Validation Iou Rock | Validation Iou Wardrobe | Validation Iou Lamp | Validation Iou Bathtub | Validation Iou Railing | Validation Iou Cushion | Validation Iou Base | Validation Iou Box | Validation Iou Column | Validation Iou Signboard | Validation Iou Chest of drawers | Validation Iou Counter | Validation Iou Sand | Validation Iou Sink | Validation Iou Skyscraper | Validation Iou Fireplace | Validation Iou Refrigerator | Validation Iou Grandstand | Validation Iou Path | Validation Iou Stairs | Validation Iou Runway | Validation Iou Case | Validation Iou Pool table | Validation Iou Pillow | Validation Iou Screen door | Validation Iou Stairway | Validation Iou River | Validation Iou Bridge | Validation Iou Bookcase | Validation Iou Blind | Validation Iou Coffee table | Validation Iou Toilet | Validation Iou Flower | Validation Iou Book | Validation Iou Hill | Validation Iou Bench | Validation Iou Countertop | Validation Iou Stove | Validation Iou Palm | Validation Iou Kitchen island | Validation Iou Computer | Validation Iou Swivel chair | Validation Iou Boat | Validation Iou Bar | Validation Iou Arcade machine | Validation Iou Hovel | Validation Iou Bus | Validation Iou Towel | Validation Iou Light | Validation Iou Truck | Validation Iou Tower | Validation Iou Chandelier | Validation Iou Awning | Validation Iou Streetlight | Validation Iou Booth | Validation Iou Television receiver | Validation Iou Airplane | Validation Iou Dirt track | Validation Iou Apparel | Validation Iou Pole | Validation Iou Land | Validation Iou Bannister | Validation Iou Escalator | Validation Iou Ottoman | Validation Iou Bottle | Validation Iou Buffet | Validation Iou Poster | Validation Iou Stage | Validation Iou Van | Validation Iou Ship | Validation Iou Fountain | 
Validation Iou Conveyer belt | Validation Iou Canopy | Validation Iou Washer | Validation Iou Plaything | Validation Iou Swimming pool | Validation Iou Stool | Validation Iou Barrel | Validation Iou Basket | Validation Iou Waterfall | Validation Iou Tent | Validation Iou Bag | Validation Iou Minibike | Validation Iou Cradle | Validation Iou Oven | Validation Iou Ball | Validation Iou Food | Validation Iou Step | Validation Iou Tank | Validation Iou Trade name | Validation Iou Microwave | Validation Iou Pot | Validation Iou Animal | Validation Iou Bicycle | Validation Iou Lake | Validation Iou Dishwasher | Validation Iou Screen | Validation Iou Blanket | Validation Iou Sculpture | Validation Iou Hood | Validation Iou Sconce | Validation Iou Vase | Validation Iou Traffic light | Validation Iou Tray | Validation Iou Ashcan | Validation Iou Fan | Validation Iou Pier | Validation Iou Crt screen | Validation Iou Plate | Validation Iou Monitor | Validation Iou Bulletin board | Validation Iou Shower | Validation Iou Radiator | Validation Iou Glass | Validation Iou Clock | Validation Iou Flag | Epoch |
|:----------:|:---------------:|:-------------------:|:------------------------:|:---------------------------:|:------------------------:|:----------------------------:|:-----------------------:|:-------------------------:|:------------------------:|:---------------------------:|:------------------------:|:------------------------:|:------------------------------:|:-------------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-------------------------:|:------------------------:|:-------------------------:|:----------------------------:|:-------------------------:|:---------------------------:|:-------------------------:|:-----------------------:|:-------------------------:|:----------------------------:|:------------------------:|:-------------------------:|:-------------------------:|:-----------------------:|:--------------------------:|:-----------------------:|:-------------------------:|:----------------------------:|:------------------------:|:-------------------------:|:------------------------:|:------------------------:|:----------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:---------------------------:|:------------------------:|:-----------------------:|:--------------------------:|:-----------------------------:|:------------------------------------:|:---------------------------:|:------------------------:|:------------------------:|:------------------------------:|:-----------------------------:|:--------------------------------:|:------------------------------:|:------------------------:|:--------------------------:|:--------------------------:|:------------------------:|:------------------------------:|:--------------------------:|:-------------------------------:|:----------------------------:|:-------------------------:|:--------------------------:|:----------------------------:|:-------------------------:|:--------------------------------:|:--------------------------:|:--------------------------:|:------------------------:|:------------------------:|:-------------------------:|:------------------------------:|:-------------------------:|:------------------------:|:----------------------------------:|:----------------------------:|:--------------------------------:|:------------------------:|:-----------------------:|:----------------------------------:|:-------------------------:|:-----------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:------------------------------:|:--------------------------:|:-------------------------------:|:-------------------------:|:---------------------------------------:|:----------------------------:|:------------------------------:|:---------------------------:|:------------------------:|:------------------------:|:-----------------------------:|:-----------------------------:|:---------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-------------------------:|:-----------------------:|:------------------------:|:----------------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:---------------------------------:|:-------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:------------------------:|:-----------------------:|:-----------------
-----------:|:--------------------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------------:|:-----------------------------:|:-----------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------------:|:--------------------------:|:---------------------------:|:-----------------------------:|:------------------------:|:--------------------------:|:------------------------:|:---------------------------------:|:------------------------:|:--------------------------:|:-----------------------:|:------------------------:|:------------------------------:|:-------------------------:|:---------------------------:|:----------------------------------:|:--------------------------:|:----------------------------:|:-------------------------:|:-------------------------:|:------------------------:|:-------------------:|:-----------------------:|:------------------:|:--------------------:|:-------------------:|:----------------------:|:-------------------:|:-------------------:|:-------------------------:|:--------------------:|:----------------------:|:-----------------------:|:---------------------:|:--------------------:|:-------------------:|:--------------------:|:-----------------------:|:--------------------:|:----------------------:|:--------------------:|:------------------:|:--------------------:|:-----------------------:|:-------------------:|:--------------------:|:--------------------:|:------------------:|:---------------------:|:------------------:|:--------------------:|:-----------------------:|:-------------------:|:--------------------:|:-------------------:|:-------------------:|:-----------------------:|:-------------------:|:----------------------:|:----------------------:|:----------------------:|:-------------------:|:------------------:|:---------------------:|:------------------------:|:-------------------------------:|:----------------------:|:-------------------:|:-------------------:|:-------------------------:|:------------------------:|:---------------------------:|:-------------------------:|:-------------------:|:---------------------:|:---------------------:|:-------------------:|:-------------------------:|:---------------------:|:--------------------------:|:-----------------------:|:--------------------:|:---------------------:|:-----------------------:|:--------------------:|:---------------------------:|:---------------------:|:---------------------:|:-------------------:|:-------------------:|:--------------------:|:-------------------------:|:--------------------:|:-------------------:|:-----------------------------:|:-----------------------:|:---------------------------:|:-------------------:|:------------------:|:-----------------------------:|:--------------------:|:------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:-------------------------:|:---------------------:|:--------------------------:|:--------------------:|:----------------------------------:|:-----------------------:|:-------------------------:|:----------------------:|:-------------------:|:-------------------:|:------------------------:|:------------------------:|:----------------------:|:---------------------:|:---------------------:|:---------------------:|:--------------------:|:------------------:|:-------------------:|:-----------------------:|:---------
-------------------:|:---------------------:|:---------------------:|:------------------------:|:----------------------------:|:--------------------:|:---------------------:|:---------------------:|:------------------------:|:-------------------:|:------------------:|:-----------------------:|:---------------------:|:-------------------:|:-------------------:|:-------------------:|:-------------------:|:-------------------:|:-------------------------:|:------------------------:|:------------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------------:|:---------------------:|:----------------------:|:------------------------:|:-------------------:|:---------------------:|:-------------------:|:----------------------------:|:-------------------:|:---------------------:|:------------------:|:-------------------:|:-------------------------:|:--------------------:|:----------------------:|:-----------------------------:|:---------------------:|:-----------------------:|:--------------------:|:--------------------:|:-------------------:|:-----:|
| 5.1475 | 5.2493 | 0.0005 | 0.0034 | 0.0065 | 0.0001 | 0.0203 | 0.0 | 0.0109 | 0.0 | 0.0094 | 0.0 | 0.0 | 0.0 | 0.0042 | 0.0016 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0050 | 0.0 | nan | 0.0 | 0.0806 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0001 | 0.0108 | 0.0 | 0.0108 | 0.0 | 0.0068 | 0.0 | 0.0 | 0.0 | 0.0042 | 0.0016 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0004 | 0.0033 | 0.0 | 0.0 | 0.0 | 0.0233 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0 |
| 4.6637 | 4.9104 | 0.0066 | 0.0285 | 0.1392 | 0.0007 | 0.0337 | 0.0 | 0.6858 | 0.0 | 0.1381 | 0.0 | 0.0642 | 0.0 | 0.0024 | 0.0087 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0029 | 0.0015 | 0.0 | nan | 0.0 | 0.1108 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0040 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0594 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0006 | 0.0148 | 0.0 | 0.5076 | 0.0 | 0.0878 | 0.0 | 0.0418 | 0.0 | 0.0024 | 0.0083 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0019 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0541 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0031 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0515 | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 1 |
| 4.3697 | 4.5399 | 0.0067 | 0.0296 | 0.1603 | 0.0037 | 0.1714 | 0.0 | 0.7533 | 0.0 | 0.0054 | 0.0 | 0.0133 | 0.0 | 0.0003 | 0.0610 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0174 | 0.0240 | 0.0 | nan | 0.0 | 0.1044 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0009 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0034 | 0.0519 | 0.0 | 0.5888 | 0.0 | 0.0045 | 0.0 | 0.0094 | 0.0 | 0.0003 | 0.0574 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0119 | 0.0184 | 0.0 | 0.0 | 0.0 | 0.0410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0007 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | 0.0 | nan | 0.0 | nan | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 2 |
| 4.2365 | 4.2736 | 0.0081 | 0.0373 | 0.1875 | 0.0060 | 0.2798 | 0.0 | 0.8488 | 0.0 | 0.0376 | 0.0 | 0.1624 | 0.0 | 0.0 | 0.0262 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0072 | 0.0050 | 0.0 | nan | 0.0 | 0.0806 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0051 | 0.0683 | 0.0 | 0.5815 | 0.0 | 0.0284 | 0.0 | 0.0848 | 0.0 | 0.0 | 0.0232 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0060 | 0.0044 | 0.0 | 0.0 | 0.0 | 0.0428 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | 3 |
| 3.9914 | 4.1480 | 0.0100 | 0.0441 | 0.2024 | 0.0122 | 0.2647 | 0.0038 | 0.8416 | 0.0 | 0.2329 | 0.0075 | 0.1753 | 0.0 | 0.0 | 0.1780 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0017 | 0.0029 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0003 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0005 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0110 | 0.0747 | 0.0032 | 0.6051 | 0.0 | 0.1095 | 0.0064 | 0.0687 | 0.0 | 0.0 | 0.0959 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0015 | 0.0027 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0003 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0004 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 4 |
| 3.9321 | 3.9477 | 0.0090 | 0.0463 | 0.2166 | 0.0201 | 0.3124 | 0.1042 | 0.8740 | 0.0069 | 0.2894 | 0.0017 | 0.0023 | 0.0 | 0.0005 | 0.1852 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0014 | 0.0013 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0013 | 0.0035 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0163 | 0.0980 | 0.0375 | 0.5355 | 0.0023 | 0.1342 | 0.0014 | 0.0015 | 0.0 | 0.0004 | 0.1264 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0013 | 0.0012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0011 | 0.0014 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 5 |
| 3.6604 | 3.7180 | 0.0075 | 0.0402 | 0.2187 | 0.0205 | 0.2240 | 0.0579 | 0.9852 | 0.0091 | 0.1951 | 0.0008 | 0.0 | 0.0 | 0.0000 | 0.0451 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0265 | 0.0002 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0020 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0146 | 0.0702 | 0.0274 | 0.3835 | 0.0022 | 0.1142 | 0.0006 | 0.0 | 0.0 | 0.0000 | 0.0415 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0238 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0015 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 6 |
| 3.5287 | 3.7016 | 0.0103 | 0.0444 | 0.2248 | 0.0386 | 0.3303 | 0.0423 | 0.9620 | 0.0059 | 0.1443 | 0.0003 | 0.0057 | 0.0010 | 0.0 | 0.1843 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0022 | 0.0015 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0134 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0243 | 0.0969 | 0.0189 | 0.4577 | 0.0018 | 0.0887 | 0.0003 | 0.0041 | 0.0001 | 0.0 | 0.1431 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0020 | 0.0015 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0070 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 7 |
| 3.4427 | 3.6365 | 0.0092 | 0.0374 | 0.2088 | 0.0461 | 0.2918 | 0.0143 | 0.9497 | 0.0031 | 0.0502 | 0.0 | 0.0 | 0.0279 | 0.0 | 0.0432 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0170 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0135 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0293 | 0.0888 | 0.0075 | 0.4462 | 0.0009 | 0.0350 | 0.0 | 0.0 | 0.0023 | 0.0 | 0.0392 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0135 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0086 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 8 |
| 3.3384 | 3.5392 | 0.0122 | 0.0435 | 0.2146 | 0.0395 | 0.3470 | 0.0906 | 0.9087 | 0.0520 | 0.0817 | 0.0213 | 0.0251 | 0.0001 | 0.0325 | 0.0889 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0107 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0248 | 0.1164 | 0.0315 | 0.4441 | 0.0115 | 0.0614 | 0.0185 | 0.0158 | 0.0000 | 0.0275 | 0.0720 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0096 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 9 |
| 3.2516 | 3.5384 | 0.0149 | 0.0442 | 0.2151 | 0.0630 | 0.4115 | 0.0480 | 0.8946 | 0.0849 | 0.0538 | 0.0158 | 0.0098 | 0.0 | 0.0692 | 0.0698 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0014 | 0.0000 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0354 | 0.1364 | 0.0274 | 0.4884 | 0.0111 | 0.0428 | 0.0126 | 0.0074 | 0.0 | 0.0495 | 0.0524 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0012 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 10 |
| 3.1296 | 3.4255 | 0.0145 | 0.0450 | 0.2214 | 0.0666 | 0.2095 | 0.0178 | 0.9762 | 0.0700 | 0.0985 | 0.0155 | 0.0798 | 0.0 | 0.0614 | 0.1428 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0163 | 0.0011 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0370 | 0.0826 | 0.0087 | 0.4031 | 0.0097 | 0.0822 | 0.0135 | 0.0638 | 0.0 | 0.0471 | 0.1060 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0132 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 11 |
| 2.9589 | 3.4019 | 0.0183 | 0.0507 | 0.2319 | 0.0693 | 0.2421 | 0.0402 | 0.9714 | 0.1850 | 0.2591 | 0.0050 | 0.0012 | 0.0009 | 0.0121 | 0.1883 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0029 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0313 | 0.1028 | 0.0177 | 0.4146 | 0.0259 | 0.1892 | 0.0046 | 0.0010 | 0.0002 | 0.0106 | 0.1343 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0025 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 12 |
| 3.0189 | 3.3557 | 0.0177 | 0.0507 | 0.2352 | 0.0914 | 0.5665 | 0.0507 | 0.9380 | 0.2546 | 0.0198 | 0.0349 | 0.0 | 0.0 | 0.0 | 0.0213 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0015 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0367 | 0.1694 | 0.0224 | 0.5123 | 0.0234 | 0.0188 | 0.0295 | 0.0 | 0.0 | 0.0 | 0.0203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0014 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 13 |
| 2.8394 | 3.3524 | 0.0200 | 0.0514 | 0.2386 | 0.1386 | 0.4823 | 0.0048 | 0.9525 | 0.2138 | 0.0713 | 0.0295 | 0.0013 | 0.0128 | 0.0031 | 0.0926 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0490 | 0.1827 | 0.0028 | 0.5781 | 0.0178 | 0.0560 | 0.0217 | 0.0013 | 0.0030 | 0.0023 | 0.0656 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 14 |
| 2.8404 | 3.1828 | 0.0249 | 0.0565 | 0.2513 | 0.1545 | 0.5864 | 0.0118 | 0.9461 | 0.1688 | 0.0642 | 0.1524 | 0.0198 | 0.0373 | 0.0032 | 0.0488 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0118 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0595 | 0.1946 | 0.0064 | 0.6387 | 0.0152 | 0.0594 | 0.1194 | 0.0184 | 0.0061 | 0.0022 | 0.0382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0103 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 15 |
| 2.7250 | 3.3893 | 0.0243 | 0.0565 | 0.2226 | 0.0777 | 0.4433 | 0.3269 | 0.8286 | 0.2780 | 0.0009 | 0.0551 | 0.0673 | 0.0 | 0.0 | 0.1144 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0097 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0332 | 0.1458 | 0.0620 | 0.6064 | 0.0247 | 0.0009 | 0.0474 | 0.0662 | 0.0 | 0.0 | 0.0963 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0096 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 16 |
| 2.6969 | 3.3879 | 0.0255 | 0.0600 | 0.2329 | 0.0872 | 0.6250 | 0.1010 | 0.8221 | 0.4317 | 0.0718 | 0.0718 | 0.0016 | 0.0339 | 0.0 | 0.0908 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0041 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0400 | 0.1948 | 0.0363 | 0.5790 | 0.0283 | 0.0656 | 0.0536 | 0.0015 | 0.0088 | 0.0 | 0.0825 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0041 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 17 |
| 2.5650 | 3.3511 | 0.0239 | 0.0563 | 0.2346 | 0.1699 | 0.3705 | 0.0843 | 0.8728 | 0.2667 | 0.0865 | 0.0203 | 0.0100 | 0.0435 | 0.0 | 0.2439 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0254 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0483 | 0.1580 | 0.0385 | 0.5064 | 0.0292 | 0.0716 | 0.0153 | 0.0094 | 0.0148 | 0.0 | 0.1613 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0228 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 18 |
| 2.6447 | 3.3085 | 0.0231 | 0.0535 | 0.2142 | 0.1567 | 0.2709 | 0.1775 | 0.8163 | 0.2098 | 0.0359 | 0.0216 | 0.0615 | 0.1147 | 0.0011 | 0.1947 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0248 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0463 | 0.1259 | 0.0468 | 0.5199 | 0.0211 | 0.0289 | 0.0181 | 0.0512 | 0.0497 | 0.0009 | 0.1063 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0237 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 19 |
| 2.7269 | 3.2485 | 0.0265 | 0.0608 | 0.2375 | 0.1562 | 0.4407 | 0.2352 | 0.8686 | 0.3767 | 0.0118 | 0.0150 | 0.0600 | 0.0175 | 0.0 | 0.1783 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0125 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0459 | 0.1811 | 0.0637 | 0.5941 | 0.0334 | 0.0113 | 0.0127 | 0.0506 | 0.0093 | 0.0 | 0.1270 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 20 |
| 2.3622 | 3.2225 | 0.0267 | 0.0603 | 0.2562 | 0.1619 | 0.6268 | 0.1314 | 0.9319 | 0.2747 | 0.0066 | 0.0055 | 0.1061 | 0.0000 | 0.0 | 0.0883 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0186 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0504 | 0.1880 | 0.0653 | 0.5869 | 0.0291 | 0.0064 | 0.0050 | 0.0936 | 0.0000 | 0.0 | 0.0771 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0177 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 21 |
| 2.4161 | 3.1927 | 0.0329 | 0.0718 | 0.2849 | 0.1985 | 0.8057 | 0.0933 | 0.9110 | 0.3610 | 0.0737 | 0.0623 | 0.0713 | 0.0 | 0.0 | 0.1793 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0447 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0616 | 0.2512 | 0.0616 | 0.7012 | 0.0359 | 0.0597 | 0.0488 | 0.0616 | 0.0 | 0.0 | 0.1237 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0413 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 22 |
| 2.3506 | 3.1751 | 0.0294 | 0.0662 | 0.2655 | 0.1655 | 0.6211 | 0.1519 | 0.9337 | 0.3733 | 0.0623 | 0.0010 | 0.1146 | 0.0117 | 0.0 | 0.1181 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0283 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0536 | 0.2035 | 0.0638 | 0.6220 | 0.0417 | 0.0549 | 0.0009 | 0.0874 | 0.0051 | 0.0 | 0.0733 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0268 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 23 |
| 2.2583 | 3.0226 | 0.0304 | 0.0729 | 0.2703 | 0.1623 | 0.5675 | 0.0827 | 0.9601 | 0.6059 | 0.0848 | 0.1014 | 0.1268 | 0.0321 | 0.0027 | 0.0904 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0283 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0509 | 0.2019 | 0.0564 | 0.6091 | 0.0681 | 0.0718 | 0.0898 | 0.1004 | 0.0095 | 0.0018 | 0.0515 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0271 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 24 |
| 2.3878 | 3.0445 | 0.0272 | 0.0665 | 0.2495 | 0.1134 | 0.2256 | 0.1343 | 0.9702 | 0.5784 | 0.0833 | 0.0369 | 0.1588 | 0.1796 | 0.0 | 0.0090 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1047 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0387 | 0.1146 | 0.0439 | 0.5496 | 0.0579 | 0.0749 | 0.0350 | 0.1110 | 0.0452 | 0.0 | 0.0068 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0902 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 25 |
| 2.2520 | 3.2388 | 0.0262 | 0.0544 | 0.2344 | 0.1484 | 0.2503 | 0.1759 | 0.9482 | 0.3381 | 0.0505 | 0.0122 | 0.1399 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0589 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0472 | 0.1064 | 0.0386 | 0.6188 | 0.0377 | 0.0471 | 0.0116 | 0.1099 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0559 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 26 |
| 2.2559 | 3.2414 | 0.0293 | 0.0702 | 0.2505 | 0.1411 | 0.2844 | 0.1513 | 0.9541 | 0.6289 | 0.1263 | 0.0204 | 0.2395 | 0.1079 | 0.0 | 0.0321 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0509 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0421 | 0.1409 | 0.0435 | 0.5995 | 0.0809 | 0.1037 | 0.0199 | 0.1574 | 0.0377 | 0.0 | 0.0165 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0490 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 27 |
| 2.1873 | 3.1585 | 0.0265 | 0.0578 | 0.2474 | 0.1105 | 0.2431 | 0.1938 | 0.9527 | 0.3975 | 0.0822 | 0.0089 | 0.0687 | 0.0590 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1379 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0370 | 0.1094 | 0.0467 | 0.5726 | 0.0522 | 0.0722 | 0.0088 | 0.0541 | 0.0182 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1160 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 28 |
| 2.2229 | 3.2781 | 0.0212 | 0.0511 | 0.2238 | 0.1344 | 0.1301 | 0.3345 | 0.9517 | 0.2707 | 0.0337 | 0.0011 | 0.0417 | 0.0451 | 0.0 | 0.0028 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0469 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0421 | 0.0615 | 0.0612 | 0.5587 | 0.0402 | 0.0317 | 0.0011 | 0.0312 | 0.0139 | 0.0 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0457 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 29 |
| 2.0533 | 3.1221 | 0.0234 | 0.0513 | 0.2350 | 0.1156 | 0.0981 | 0.2648 | 0.9695 | 0.2477 | 0.0974 | 0.0007 | 0.0095 | 0.0688 | 0.0059 | 0.0042 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1178 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0391 | 0.0534 | 0.0513 | 0.5578 | 0.0392 | 0.0874 | 0.0007 | 0.0063 | 0.0173 | 0.0043 | 0.0034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0981 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 30 |
| 2.1138 | 3.0248 | 0.0346 | 0.0723 | 0.2666 | 0.1491 | 0.2392 | 0.1880 | 0.9419 | 0.5280 | 0.1621 | 0.0389 | 0.1921 | 0.1448 | 0.0188 | 0.0650 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1517 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0449 | 0.1262 | 0.0508 | 0.6504 | 0.0921 | 0.1362 | 0.0377 | 0.0996 | 0.0353 | 0.0110 | 0.0348 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1334 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 31 |
| 2.0237 | 3.1016 | 0.0298 | 0.0697 | 0.2643 | 0.1189 | 0.3708 | 0.0876 | 0.9755 | 0.5466 | 0.1560 | 0.0129 | 0.1847 | 0.1496 | 0.0085 | 0.0039 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1020 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0424 | 0.1745 | 0.0350 | 0.5661 | 0.0765 | 0.1132 | 0.0125 | 0.1016 | 0.0250 | 0.0049 | 0.0026 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0975 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 32 |
| 2.0452 | 3.0575 | 0.0395 | 0.0822 | 0.2928 | 0.1673 | 0.2678 | 0.1569 | 0.9497 | 0.5373 | 0.2377 | 0.0133 | 0.4283 | 0.1188 | 0.0094 | 0.1018 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.2195 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0469 | 0.1417 | 0.0460 | 0.6481 | 0.1476 | 0.1783 | 0.0133 | 0.1925 | 0.0336 | 0.0067 | 0.0599 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1856 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 33 |
| 2.0312 | 3.1034 | 0.0337 | 0.0734 | 0.2783 | 0.1170 | 0.2552 | 0.1545 | 0.9641 | 0.6504 | 0.3794 | 0.0204 | 0.0618 | 0.0526 | 0.0008 | 0.0426 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1656 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0386 | 0.1305 | 0.0434 | 0.5712 | 0.1144 | 0.2689 | 0.0202 | 0.0338 | 0.0187 | 0.0005 | 0.0263 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1470 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 34 |
| 1.8983 | 3.0708 | 0.0337 | 0.0693 | 0.2750 | 0.1540 | 0.2237 | 0.1702 | 0.9630 | 0.5114 | 0.2737 | 0.0828 | 0.0892 | 0.0144 | 0.0183 | 0.0207 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1815 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0437 | 0.1181 | 0.0532 | 0.5937 | 0.0962 | 0.1888 | 0.0804 | 0.0504 | 0.0053 | 0.0118 | 0.0147 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1589 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 35 |
| 1.8995 | 3.1896 | 0.0315 | 0.0697 | 0.2678 | 0.1361 | 0.2399 | 0.1938 | 0.9515 | 0.5490 | 0.2439 | 0.0061 | 0.1198 | 0.0800 | 0.0057 | 0.0362 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1562 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0430 | 0.1214 | 0.0559 | 0.5873 | 0.0893 | 0.1867 | 0.0061 | 0.0718 | 0.0270 | 0.0036 | 0.0200 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1424 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 36 |
| 1.8726 | 3.1599 | 0.0296 | 0.0630 | 0.2604 | 0.1354 | 0.2214 | 0.1666 | 0.9696 | 0.3325 | 0.2382 | 0.0041 | 0.1731 | 0.0709 | 0.0028 | 0.0028 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1384 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0428 | 0.1086 | 0.0456 | 0.5576 | 0.0554 | 0.1780 | 0.0041 | 0.0993 | 0.0218 | 0.0020 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1260 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 37 |
| 1.8048 | 3.1669 | 0.0342 | 0.0718 | 0.2727 | 0.1570 | 0.3151 | 0.1046 | 0.9534 | 0.5954 | 0.2125 | 0.0580 | 0.1875 | 0.0376 | 0.0056 | 0.0255 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1488 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0463 | 0.1461 | 0.0423 | 0.6217 | 0.0884 | 0.1620 | 0.0531 | 0.1119 | 0.0115 | 0.0032 | 0.0160 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1326 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 38 |
| 1.8245 | 3.1467 | 0.0343 | 0.0708 | 0.2698 | 0.0884 | 0.1149 | 0.3070 | 0.9643 | 0.3534 | 0.3452 | 0.0086 | 0.2870 | 0.0635 | 0.0293 | 0.0208 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1774 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0339 | 0.0669 | 0.0627 | 0.5611 | 0.0633 | 0.2586 | 0.0086 | 0.1749 | 0.0167 | 0.0222 | 0.0147 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1569 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 39 |
| 1.7771 | 3.1239 | 0.0375 | 0.0747 | 0.2827 | 0.1460 | 0.1138 | 0.1532 | 0.9707 | 0.3409 | 0.3831 | 0.0042 | 0.4824 | 0.0286 | 0.0321 | 0.0471 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.2107 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0411 | 0.0702 | 0.0391 | 0.5875 | 0.0934 | 0.2324 | 0.0042 | 0.2149 | 0.0112 | 0.0231 | 0.0343 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1842 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 40 |
| 1.8377 | 3.1836 | 0.0376 | 0.0768 | 0.2888 | 0.1579 | 0.1909 | 0.1047 | 0.9639 | 0.4080 | 0.4131 | 0.0331 | 0.2941 | 0.0722 | 0.0363 | 0.1267 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1959 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0462 | 0.1070 | 0.0420 | 0.6025 | 0.0902 | 0.2417 | 0.0319 | 0.1777 | 0.0275 | 0.0219 | 0.0566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1699 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 41 |
| 1.7250 | 3.0627 | 0.0368 | 0.0755 | 0.2839 | 0.1194 | 0.1982 | 0.1689 | 0.9550 | 0.4521 | 0.4746 | 0.0106 | 0.2432 | 0.0545 | 0.0334 | 0.0463 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1885 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0421 | 0.1019 | 0.0509 | 0.6011 | 0.0727 | 0.2764 | 0.0103 | 0.1615 | 0.0186 | 0.0202 | 0.0248 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1661 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 42 |
| 1.7921 | 3.0676 | 0.0353 | 0.0750 | 0.2816 | 0.1179 | 0.3930 | 0.1759 | 0.9633 | 0.4752 | 0.3845 | 0.0145 | 0.2135 | 0.0685 | 0.0003 | 0.0017 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1183 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0420 | 0.1759 | 0.0649 | 0.5873 | 0.0616 | 0.2640 | 0.0137 | 0.1395 | 0.0172 | 0.0002 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1155 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 43 |
| 1.7035 | 3.0840 | 0.0345 | 0.0729 | 0.2840 | 0.1266 | 0.3252 | 0.1491 | 0.9690 | 0.4443 | 0.3330 | 0.0122 | 0.2387 | 0.0582 | 0.0041 | 0.0036 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1797 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0487 | 0.1549 | 0.0579 | 0.5556 | 0.0625 | 0.2255 | 0.0117 | 0.1394 | 0.0119 | 0.0026 | 0.0028 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1749 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 44 |
| 1.7522 | 3.0969 | 0.0354 | 0.0751 | 0.2889 | 0.1294 | 0.3678 | 0.0844 | 0.9680 | 0.4243 | 0.3929 | 0.0195 | 0.2566 | 0.0468 | 0.0075 | 0.0711 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1599 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0464 | 0.1575 | 0.0560 | 0.5802 | 0.0780 | 0.2396 | 0.0173 | 0.1518 | 0.0105 | 0.0041 | 0.0334 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1478 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 45 |
| 1.6936 | 3.0632 | 0.0362 | 0.0833 | 0.3058 | 0.0903 | 0.8071 | 0.0320 | 0.9814 | 0.6581 | 0.3170 | 0.1491 | 0.0291 | 0.1003 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0834 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0375 | 0.2963 | 0.0211 | 0.6643 | 0.0709 | 0.2182 | 0.1291 | 0.0208 | 0.0196 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0801 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 46 |
| 1.7398 | 3.0390 | 0.0411 | 0.0839 | 0.3050 | 0.0905 | 0.7540 | 0.0823 | 0.9713 | 0.3896 | 0.3694 | 0.1507 | 0.1017 | 0.2556 | 0.0067 | 0.0148 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0854 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0377 | 0.3108 | 0.0463 | 0.6999 | 0.0508 | 0.2419 | 0.1403 | 0.0706 | 0.0354 | 0.0035 | 0.0086 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0801 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 47 |
| 1.7391 | 3.0520 | 0.0451 | 0.0938 | 0.3208 | 0.1340 | 0.5897 | 0.0417 | 0.9525 | 0.5587 | 0.4922 | 0.1891 | 0.2135 | 0.1646 | 0.0278 | 0.1443 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.1507 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0498 | 0.2622 | 0.0307 | 0.7044 | 0.0947 | 0.2554 | 0.1678 | 0.1294 | 0.0417 | 0.0147 | 0.0585 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1297 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | 0.0 | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 48 |
| 1.7868 | 3.0539 | 0.0483 | 0.0908 | 0.3160 | 0.1166 | 0.4198 | 0.1940 | 0.9531 | 0.3920 | 0.4672 | 0.1731 | 0.3648 | 0.1441 | 0.0217 | 0.0857 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.2078 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0423 | 0.2014 | 0.0801 | 0.6618 | 0.0735 | 0.2882 | 0.1721 | 0.1926 | 0.0337 | 0.0128 | 0.0439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1782 | 0.0 | 0.0 | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | 0.0 | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | 0.0 | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 0.0 | nan | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | nan | nan | nan | nan | nan | 0.0 | 0.0 | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | 49 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_twostagequadruplet_hier_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.30 +/- 0.21
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repository id and checkpoint filename below are placeholders):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub
# Download the checkpoint from the Hub, then restore the agent (replace the placeholders with the real repo id and filename).
checkpoint = load_from_hub(repo_id="<user>/<repo-name>", filename="<checkpoint>.zip")
model = A2C.load(checkpoint)
```
|
AnonymousSub/SR_rule_based_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-01-19T18:41:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.77 +/- 23.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repository id and checkpoint filename below are placeholders):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Download the checkpoint from the Hub, then restore the agent (replace the placeholders with the real repo id and filename).
checkpoint = load_from_hub(repo_id="<user>/<repo-name>", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
AnonymousSub/SR_specter | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-01-19T18:44:19Z | ---
license: gpl-3.0
datasets:
- mble/nameToStdName
language:
- en
library_name: spacy
tags:
- code
- ner
- named entity recognition
- minecraft
- minecraft plugins
- product name
---
# nameToStdName for Minecraft plugins from SpigotMC and Bukkit
Extracts standardized plugin names from Spigot/Bukkit plugin titles and descriptions.
Main repository: https://github.com/pluget/services
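A minimal inference sketch, assuming the trained spaCy pipeline has been downloaded locally; the path, example text, and entity handling below are placeholders, not the project's actual packaging:
```python
import spacy
# Load the trained pipeline from a local directory (placeholder path).
nlp = spacy.load("path/to/nameToStdName-model")
doc = nlp("WorldEdit 7.2.12 - a fast and easy-to-use in-game world editor for Spigot/Bukkit")
# Print the spans recognised as plugin names.
print([(ent.text, ent.label_) for ent in doc.ents])
```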
## License (SPDX)
GPL-3.0 for code
ODbL-1.0 for data/models
## Creators
Maciej Błędkowski - Founder, Lead Developer |
AnonymousSub/SciFive_pubmedqa_question_generation | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 7 | 2023-01-19T18:45:28Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym
# load_from_hub and evaluate_agent are helper functions defined in the Deep RL Course notebook this template comes from; they are not part of a published package.
model = load_from_hub(repo_id="billray110/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
AnonymousSub/bert-base-uncased_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: creativeml-openrail-m
---
Merge recipe + tea model, thanks to https://huggingface.co/andite |
AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2023-01-19T18:52:12Z | ---
library_name: xpmir
---
The TAS-Balanced dense retrieval model, adapted for experimaestro IR (xpmir). |
AnonymousSub/bert_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Core Dreambooth model trained by Eto-Demerzel with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AnonymousSub/bert_triplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2023-01-19T18:59:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: emotion_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7938071780436312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3046
- Accuracy: 0.7938
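A minimal inference sketch using the `transformers` pipeline API; the repository id below is a placeholder, not the actual one:
```python
from transformers import pipeline
# Replace the placeholder with the actual repository id or a local path to the checkpoint.
classifier = pipeline("text-classification", model="<user>/emotion_model")
print(classifier("I can't believe we finally shipped the release!"))
```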
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 204 | 1.1915 | 0.7854 |
| No log | 2.0 | 408 | 1.1624 | 0.7889 |
| 0.0451 | 3.0 | 612 | 1.1865 | 0.7952 |
| 0.0451 | 4.0 | 816 | 1.2653 | 0.7945 |
| 0.0154 | 5.0 | 1020 | 1.3046 | 0.7938 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousSub/cline-emanuals-techqa | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.93 +/- 15.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repository id and checkpoint filename below are placeholders):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Download the checkpoint from the Hub, then restore the agent (replace the placeholders with the real repo id and filename).
checkpoint = load_from_hub(repo_id="<user>/<repo-name>", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
AnonymousSub/cline-papers-biomed-0.618 | [
"pytorch",
"roberta",
"transformers"
]
| null | {
"architectures": [
"LecbertForPreTraining"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-19jan-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-19jan-9
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6105
- Rouge1: 7.7
- Rouge2: 0.1667
- Rougel: 7.5759
- Rougelsum: 7.6113
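A minimal generation sketch with `transformers`; the repository id and input text are placeholders:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id = "<user>/mt5-small-finetuned-19jan-9"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
inputs = tokenizer("Text to summarize goes here.", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```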
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 16.5659 | 1.0 | 50 | 5.8214 | 2.2745 | 0.2338 | 2.2506 | 2.2815 |
| 10.2143 | 2.0 | 100 | 3.8680 | 4.4303 | 0.7671 | 4.4208 | 4.483 |
| 7.4492 | 3.0 | 150 | 3.2448 | 3.6533 | 0.6857 | 3.6861 | 3.6791 |
| 5.8239 | 4.0 | 200 | 3.0981 | 5.6287 | 0.8679 | 5.642 | 5.627 |
| 4.9377 | 5.0 | 250 | 3.0326 | 6.1068 | 1.1621 | 5.9651 | 6.0361 |
| 4.4824 | 6.0 | 300 | 2.9802 | 6.6496 | 1.3443 | 6.5332 | 6.5395 |
| 4.2193 | 7.0 | 350 | 2.9484 | 6.0845 | 1.2364 | 6.1077 | 6.0827 |
| 4.0234 | 8.0 | 400 | 2.9076 | 6.0958 | 1.3299 | 6.0806 | 6.0413 |
| 3.9046 | 9.0 | 450 | 2.8460 | 5.6462 | 1.1644 | 5.6397 | 5.6103 |
| 3.8087 | 10.0 | 500 | 2.8036 | 5.7538 | 1.1644 | 5.774 | 5.7442 |
| 3.6872 | 11.0 | 550 | 2.7727 | 6.5993 | 1.3311 | 6.5773 | 6.6049 |
| 3.6338 | 12.0 | 600 | 2.7285 | 6.0417 | 1.0778 | 6.047 | 6.089 |
| 3.574 | 13.0 | 650 | 2.7132 | 8.7833 | 0.25 | 8.803 | 8.6985 |
| 3.548 | 14.0 | 700 | 2.7023 | 8.9393 | 0.75 | 8.9619 | 8.8679 |
| 3.49 | 15.0 | 750 | 2.6943 | 9.1778 | 1.0 | 9.1537 | 9.0722 |
| 3.4098 | 16.0 | 800 | 2.6856 | 8.9167 | 0.75 | 8.9477 | 8.8597 |
| 3.3776 | 17.0 | 850 | 2.6827 | 8.3503 | 0.1667 | 8.3179 | 8.2614 |
| 3.3493 | 18.0 | 900 | 2.6899 | 8.6983 | 0.4524 | 8.6503 | 8.602 |
| 3.3309 | 19.0 | 950 | 2.6833 | 8.2433 | 0.4524 | 8.1185 | 8.1429 |
| 3.2833 | 20.0 | 1000 | 2.6785 | 8.2194 | 0.4524 | 8.106 | 8.1227 |
| 3.2491 | 21.0 | 1050 | 2.6817 | 8.2194 | 0.4524 | 8.106 | 8.1227 |
| 3.22 | 22.0 | 1100 | 2.6697 | 8.3829 | 0.4524 | 8.2852 | 8.3167 |
| 3.2433 | 23.0 | 1150 | 2.6522 | 8.2194 | 0.4524 | 8.106 | 8.1227 |
| 3.1882 | 24.0 | 1200 | 2.6493 | 8.2194 | 0.4524 | 8.106 | 8.1227 |
| 3.1622 | 25.0 | 1250 | 2.6630 | 8.3593 | 0.4524 | 8.2859 | 8.3167 |
| 3.1396 | 26.0 | 1300 | 2.6523 | 8.3593 | 0.4524 | 8.2859 | 8.3167 |
| 3.121 | 27.0 | 1350 | 2.6565 | 8.3593 | 0.4524 | 8.2859 | 8.3167 |
| 3.1095 | 28.0 | 1400 | 2.6385 | 8.5833 | 0.4524 | 8.45 | 8.516 |
| 3.1113 | 29.0 | 1450 | 2.6378 | 7.6135 | 0.3333 | 7.5385 | 7.5885 |
| 3.0661 | 30.0 | 1500 | 2.6415 | 8.2734 | 0.3333 | 8.1583 | 8.25 |
| 3.0316 | 31.0 | 1550 | 2.6435 | 7.6135 | 0.3333 | 7.5385 | 7.5885 |
| 3.0468 | 32.0 | 1600 | 2.6342 | 7.6135 | 0.3333 | 7.5385 | 7.5885 |
| 3.0323 | 33.0 | 1650 | 2.6330 | 7.8333 | 0.4167 | 7.7551 | 7.8317 |
| 3.0031 | 34.0 | 1700 | 2.6332 | 8.1192 | 0.4167 | 8.0718 | 8.1167 |
| 2.9904 | 35.0 | 1750 | 2.6291 | 8.2734 | 0.3333 | 8.1583 | 8.25 |
| 2.9765 | 36.0 | 1800 | 2.6364 | 7.8667 | 0.4167 | 7.8167 | 7.8269 |
| 2.9872 | 37.0 | 1850 | 2.6267 | 7.9984 | 0.4167 | 7.875 | 7.9843 |
| 2.976 | 38.0 | 1900 | 2.6252 | 7.9984 | 0.4167 | 7.875 | 7.9843 |
| 2.9528 | 39.0 | 1950 | 2.6319 | 7.701 | 0.3333 | 7.7167 | 7.6769 |
| 2.9385 | 40.0 | 2000 | 2.6279 | 7.8667 | 0.4167 | 7.8167 | 7.8269 |
| 2.9371 | 41.0 | 2050 | 2.6227 | 7.4658 | 0.4167 | 7.4167 | 7.4397 |
| 2.9214 | 42.0 | 2100 | 2.6172 | 8.1355 | 0.4167 | 8.0537 | 8.1329 |
| 2.9472 | 43.0 | 2150 | 2.6133 | 8.1355 | 0.4167 | 8.0537 | 8.1329 |
| 2.9215 | 44.0 | 2200 | 2.6101 | 7.4516 | 0.1667 | 7.3718 | 7.3647 |
| 2.9188 | 45.0 | 2250 | 2.6097 | 7.4516 | 0.1667 | 7.3718 | 7.3647 |
| 2.9003 | 46.0 | 2300 | 2.6089 | 7.4516 | 0.1667 | 7.3718 | 7.3647 |
| 2.8926 | 47.0 | 2350 | 2.6137 | 7.7769 | 0.1667 | 7.6692 | 7.7272 |
| 2.8872 | 48.0 | 2400 | 2.6118 | 7.7769 | 0.1667 | 7.6692 | 7.7272 |
| 2.8809 | 49.0 | 2450 | 2.6089 | 7.247 | 0.1667 | 7.151 | 7.1897 |
| 2.8676 | 50.0 | 2500 | 2.6027 | 7.2881 | 0.1667 | 7.1551 | 7.1947 |
| 2.8792 | 51.0 | 2550 | 2.6131 | 7.1382 | 0.1667 | 7.0703 | 7.0476 |
| 2.8705 | 52.0 | 2600 | 2.6144 | 7.7 | 0.1667 | 7.5759 | 7.6113 |
| 2.8887 | 53.0 | 2650 | 2.6130 | 7.7 | 0.1667 | 7.5759 | 7.6113 |
| 2.872 | 54.0 | 2700 | 2.6080 | 7.7 | 0.1667 | 7.5759 | 7.6113 |
| 2.8593 | 55.0 | 2750 | 2.6093 | 7.2881 | 0.1667 | 7.1551 | 7.1947 |
| 2.868 | 56.0 | 2800 | 2.6091 | 7.8387 | 0.1667 | 7.6729 | 7.7334 |
| 2.8729 | 57.0 | 2850 | 2.6096 | 7.8387 | 0.1667 | 7.6729 | 7.7334 |
| 2.8526 | 58.0 | 2900 | 2.6100 | 7.1382 | 0.1667 | 7.0703 | 7.0476 |
| 2.8671 | 59.0 | 2950 | 2.6105 | 7.7 | 0.1667 | 7.5759 | 7.6113 |
| 2.8544 | 60.0 | 3000 | 2.6105 | 7.7 | 0.1667 | 7.5759 | 7.6113 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousSub/consert-s10-AR | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CodeBERTa-commit-message-autocomplete
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CodeBERTa-commit-message-autocomplete
This model is a fine-tuned version of [microsoft/codebert-base-mlm](https://huggingface.co/microsoft/codebert-base-mlm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8906
- Accuracy: 0.6346
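Since the base checkpoint is a masked language model, the fine-tuned model can be queried with the fill-mask pipeline; a minimal sketch, with the repository id as a placeholder:
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="<user>/CodeBERTa-commit-message-autocomplete")
# RoBERTa-style checkpoints use "<mask>" as the mask token.
for prediction in fill_mask("fix <mask> handling in the login endpoint"):
    print(prediction["token_str"], prediction["score"])
```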
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 4.5523 | 0.3432 |
| No log | 2.0 | 80 | 3.8711 | 0.3796 |
| No log | 3.0 | 120 | 3.2419 | 0.4503 |
| No log | 4.0 | 160 | 2.8709 | 0.4962 |
| No log | 5.0 | 200 | 2.6999 | 0.5085 |
| No log | 6.0 | 240 | 2.6622 | 0.5216 |
| No log | 7.0 | 280 | 2.5048 | 0.5410 |
| No log | 8.0 | 320 | 2.4249 | 0.5581 |
| No log | 9.0 | 360 | 2.3727 | 0.5623 |
| No log | 10.0 | 400 | 2.3625 | 0.5665 |
| No log | 11.0 | 440 | 2.3320 | 0.5706 |
| No log | 12.0 | 480 | 2.1704 | 0.5950 |
| 3.081 | 13.0 | 520 | 2.2109 | 0.5893 |
| 3.081 | 14.0 | 560 | 2.2330 | 0.5884 |
| 3.081 | 15.0 | 600 | 2.1454 | 0.5954 |
| 3.081 | 16.0 | 640 | 2.1740 | 0.5951 |
| 3.081 | 17.0 | 680 | 2.1219 | 0.5920 |
| 3.081 | 18.0 | 720 | 2.1136 | 0.6052 |
| 3.081 | 19.0 | 760 | 2.0586 | 0.6127 |
| 3.081 | 20.0 | 800 | 2.0185 | 0.6113 |
| 3.081 | 21.0 | 840 | 2.0493 | 0.6129 |
| 3.081 | 22.0 | 880 | 1.9766 | 0.6217 |
| 3.081 | 23.0 | 920 | 1.9968 | 0.6189 |
| 3.081 | 24.0 | 960 | 1.9567 | 0.6276 |
| 2.122 | 25.0 | 1000 | 1.9611 | 0.6269 |
| 2.122 | 26.0 | 1040 | 1.9437 | 0.6254 |
| 2.122 | 27.0 | 1080 | 1.9865 | 0.6266 |
| 2.122 | 28.0 | 1120 | 1.9112 | 0.6295 |
| 2.122 | 29.0 | 1160 | 1.8903 | 0.6292 |
| 2.122 | 30.0 | 1200 | 1.8992 | 0.6376 |
| 2.122 | 31.0 | 1240 | 1.9122 | 0.6327 |
| 2.122 | 32.0 | 1280 | 1.8906 | 0.6346 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AnonymousSub/consert-s10-SR | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- art
- style
language:
- en
---
# Shinkai-Art ✨
Stable Diffusion model fine-tuned from `andite/anything-v4.0`.
This model generates images in the style of **Makoto Shinkai** (Japanese anime director); the style of his anime films deeply inspired me to create a separate model in his style.
Some of his anime films:
1. [Your Name](https://www.youtube.com/watch?v=xU47nhruN-Q)
2. [Weathering With You](https://www.youtube.com/watch?v=Q6iK6DjV_iE)
among others.
The training dataset was also sourced from these same films.
### Trigger Words:
`shinkai-art`, `shinkaiart`, `portrait`
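A minimal `diffusers` sketch using the trigger words; the repository id and prompt are placeholders, and fp16/CUDA are optional:
```python
import torch
from diffusers import StableDiffusionPipeline
# Placeholder repository id; replace with the actual model repo or a local path.
pipe = StableDiffusionPipeline.from_pretrained("<user>/shinkai-art", torch_dtype=torch.float16).to("cuda")
prompt = "shinkai-art, portrait, a girl on a rooftop watching clouds over the city at dusk"
image = pipe(prompt).images[0]
image.save("shinkai_art_sample.png")
```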
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
### 💞 Send me a query at:
[](https://www.instagram.com/iamhemantindia)
### Sample pictures generated using this model:







 |
AnonymousSub/declutr-model | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-19T19:53:12Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaK/my_food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaK/my_food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1163
- Validation Loss: 0.2927
- Train Accuracy: 0.936
- Epoch: 4
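A minimal inference sketch with the `transformers` pipeline; the image path is a placeholder, and the TensorFlow backend is requested because the checkpoint was trained with Keras:
```python
from transformers import pipeline
# framework="tf" because this checkpoint was trained with Keras/TensorFlow.
classifier = pipeline("image-classification", model="MariaK/my_food_classifier", framework="tf")
print(classifier("plate_of_food.jpg"))  # local path or URL to a food photo
```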
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.5557 | 1.4200 | 0.897 | 0 |
| 0.8928 | 0.6662 | 0.931 | 1 |
| 0.3831 | 0.4001 | 0.938 | 2 |
| 0.1892 | 0.3486 | 0.93 | 3 |
| 0.1163 | 0.2927 | 0.936 | 4 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousSub/declutr-model_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-01-19T19:53:30Z | ---
language:
- ps
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Base Pashto - Augmented
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs
type: google/fleurs
config: ps_af
split: test
args: ps_af
metrics:
- name: Wer
type: wer
value: 59.64817110973342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Pashto - Augmented
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the google/fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7901
- Wer: 59.6482
- Cer: 27.0947
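A minimal transcription sketch with the `transformers` pipeline; the repository id and audio path below are placeholders:
```python
from transformers import pipeline
# Placeholder repository id and audio path; replace with the actual checkpoint and a Pashto audio file.
asr = pipeline("automatic-speech-recognition", model="<user>/whisper-base-ps-augmented")
print(asr("pashto_sample.wav")["text"])
```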
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- training_steps: 600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.1215 | 2.38 | 100 | 0.9444 | 68.3354 | 30.2694 |
| 0.8268 | 4.75 | 200 | 0.8267 | 63.2440 | 28.2636 |
| 0.6912 | 7.14 | 300 | 0.7959 | 62.2443 | 28.2123 |
| 0.5725 | 9.52 | 400 | 0.7896 | 60.5859 | 27.6920 |
| 0.5231 | 11.89 | 500 | 0.7884 | 59.8574 | 27.1273 |
| 0.4752 | 14.28 | 600 | 0.7901 | 59.6482 | 27.0947 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
AnonymousSub/declutr-s10-AR | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 644.50 +/- 162.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga redfungus -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga redfungus -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga redfungus
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
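## Loading the model in Python
Beyond the RL Zoo commands above, the checkpoint can also be loaded directly with SB3. The sketch below is a minimal example: the repo id follows the usual RL Zoo naming convention and may need to be adjusted, and the observation pipeline must match training (AtariWrapper plus a 4-frame stack, as listed in the hyperparameters).
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed repo id (RL Zoo default naming); adjust it if the upload uses a different name.
checkpoint = load_from_hub(repo_id="redfungus/dqn-SpaceInvadersNoFrameskip-v4",
                           filename="dqn-SpaceInvadersNoFrameskip-v4.zip")
# Note: loading may also allocate an empty replay buffer sized per the saved hyperparameters.
model = DQN.load(checkpoint)

# Recreate the training-time observation pipeline: Atari preprocessing + 4-frame stack.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```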
|
AnonymousSub/declutr-s10-SR | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-samsum-ElectrifAi_v9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-samsum-ElectrifAi_v9
This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2325
- Rouge1: 55.1928
- Rouge2: 33.3871
- Rougel: 43.865
- Rougelsum: 54.1984
- Gen Len: 108.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 27 | 1.2252 | 55.969 | 34.0884 | 43.1389 | 54.7972 | 108.0 |
| No log | 2.0 | 54 | 1.2156 | 55.834 | 34.3509 | 43.5382 | 54.4829 | 102.8 |
| No log | 3.0 | 81 | 1.2325 | 55.1928 | 33.3871 | 43.865 | 54.1984 | 108.8667 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.2
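## How to use
The card does not include an inference snippet, so here is a minimal sketch using the `transformers` pipeline API. The repo id is a placeholder, since the Hub path of this checkpoint is not stated in the card.
```python
from transformers import pipeline

# Placeholder repo id: replace with the actual Hub path of this checkpoint.
summarizer = pipeline("summarization", model="<user>/bart-large-cnn-samsum-ElectrifAi_v9")

dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes, 12:30 at the usual place.
Anna: Great, see you there!"""

print(summarizer(dialogue)[0]["summary_text"])
```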
|
AnonymousSub/dummy_2 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 39 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Meryl_Stryfe_20230119_1900_6000_steps on Stable Diffusion via Dreambooth
#### model by NickKolok
This is the Stable Diffusion model fine-tuned on the Meryl_Stryfe_20230119_1900_6000_steps concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **merylstryfetrigun**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:


















































|
AnonymousSub/dummy_2_parent | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ArtYac/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnonymousSub/hier_triplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-19T20:12:18Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1416.37 +/- 369.61
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
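A more complete sketch, assuming the checkpoint was uploaded as a standard SB3 zip; the repo id and filename are placeholders. `pybullet_envs` must be imported so that `AntBulletEnv-v0` is registered, and if the agent was trained with `VecNormalize`, the saved normalization statistics must be loaded as well.
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0 with gym)

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename: point these at the actual upload.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```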
|
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 617.50 +/- 121.66
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Krawcts -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Krawcts -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Krawcts
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | 2023-01-19T20:24:30Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Honza/SnowballTarget1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- conversational
license: mit
datasets:
- bigscience/opensubtitles
language:
- es
pipeline_tag: text-generation
---
## Finetuned DialoGPT model on Spanish Conversations
This model was finetuned from the original [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model on subtitles from Spanish movies and telenovelas from the awesome [OpenSubtitle dataset](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles).
DialoGPT paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
Some example dialog from the model:
|Role | Response |
|---------|--------|
|User | ¿Cómo está el clima hoy? |
| Bot | Muy bien. |
|User | Eso es bueno escuchar. ¿Vas a ir a la playa? |
| Bot | No, también puedo regresar. |
|User | Oh bien, bueno, iré a la playa más tarde. |
| Bot | ¡No! |
|User | ¿Qué está mal con eso? |
| Bot | ¡Cómo que no! |
|User | Estoy confundido, ¿por qué no puedo ir a la playa? |
| Bot | ¡Cómo que no! |
|User | Explícamelo por favor. |
| Bot | ¡No! |
## Using the model
Example code for trying out the model (taken directly from the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model card):
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("emre/spanish-dialoGPT")
model = AutoModelWithLMHead.from_pretrained("emre/spanish-dialoGPT")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | Access to model twinaigo/twnnzzz is restricted and you are not in the authorized list. Visit https://huggingface.co/twinaigo/twnnzzz to ask for access. |
AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: Honza/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-casedepoch3_sexist_baseline_with_reddit_and_gabfortest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-casedepoch3_sexist_baseline_with_reddit_and_gabfortest
This model is a fine-tuned version of [Wiebke/bert-base-casedepoch3_sexist_baseline_with_reddit_and_gab](https://huggingface.co/Wiebke/bert-base-casedepoch3_sexist_baseline_with_reddit_and_gab) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
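## How to use
The usage sections above are empty, so here is a minimal, hypothetical inference sketch with the `transformers` pipeline API; the repo id is a placeholder for wherever this fine-tuned checkpoint was pushed.
```python
from transformers import pipeline

# Placeholder repo id: replace with the actual Hub path of this fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="<user>/bert-base-casedepoch3_sexist_baseline_with_reddit_and_gabfortest",
)

print(classifier("Example sentence to score."))
```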
|
Appolo/TestModel | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: whisper-training-blog
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fleurs
type: fleurs
config: sv_se
split: validation
args: sv_se
metrics:
- name: Wer
type: wer
value: 180.05748044068338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-training-blog
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0050
- Wer: 180.0575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- training_steps: 448
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4112 | 0.1 | 44 | 1.4919 | 245.3457 |
| 1.0502 | 0.2 | 88 | 1.2255 | 220.1501 |
| 0.9033 | 0.29 | 132 | 1.1203 | 206.2430 |
| 0.8141 | 1.06 | 176 | 1.0675 | 201.9639 |
| 0.8029 | 1.16 | 220 | 1.0394 | 178.3650 |
| 0.6324 | 1.25 | 264 | 1.0301 | 221.2997 |
| 0.6972 | 2.02 | 308 | 1.0134 | 176.6725 |
| 0.6052 | 2.12 | 352 | 1.0065 | 194.7150 |
| 0.6047 | 2.21 | 396 | 1.0030 | 160.9133 |
| 0.5849 | 2.31 | 440 | 1.0050 | 180.0575 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu118
- Datasets 2.10.1
- Tokenizers 0.13.3
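## How to use
No inference example is given above, so the sketch below shows the usual `transformers` ASR pipeline call; the repo id is a placeholder, and decoding an audio file requires `ffmpeg`.
```python
from transformers import pipeline

# Placeholder repo id: replace with the actual Hub path of this fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="<user>/whisper-training-blog")

# Accepts a path to an audio file (or a raw numpy array of samples).
print(asr("sample_sv.wav")["text"])
```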
|
ArBert/albert-base-v2-finetuned-ner-agglo-twitter | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1612873364737036298/QywWNivj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Science&Technology🖖</div>
<div style="text-align: center; font-size: 14px;">@sakhaleta</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Science&Technology🖖.
| Data | Science&Technology🖖 |
| --- | --- |
| Tweets downloaded | 258 |
| Retweets | 56 |
| Short tweets | 32 |
| Tweets kept | 170 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wa5buitk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sakhaleta's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/q0yrohjp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/q0yrohjp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sakhaleta')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ArBert/roberta-base-finetuned-ner-agglo | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery_ex2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.72 +/- 0.45
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Gumibit/q-FrozenLake-v1-4x4-noSlippery_ex2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ArashEsk95/bert-base-uncased-finetuned-cola | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- pt
thumbnail: Portuguese BERT for the Legal Domain
tags:
- sentence-transformers
- transformers
- bert
- pytorch
- sentence-similarity
license: mit
pipeline_tag: sentence-similarity
datasets:
- stjiris/portuguese-legal-sentences-v0
- assin
- assin2
- stsb_multi_mt
widget:
- source_sentence: "O advogado apresentou as provas ao juíz."
sentences:
- "O juíz leu as provas."
- "O juíz leu o recurso."
- "O juíz atirou uma pedra."
model-index:
- name: BERTimbau
results:
- task:
name: STS
type: STS
metrics:
- name: Pearson Correlation - assin Dataset
type: Pearson Correlation
value: 0.7800806555562139
- name: Pearson Correlation - assin2 Dataset
type: Pearson Correlation
value: 0.841456941132706
- name: Pearson Correlation - stsb_multi_mt pt Dataset
type: Pearson Correlation
value: 0.8506042636740455
---
[](https://www.inesc-id.pt/projects/PR07005/)
[](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/).
Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
# stjiris/bert-large-portuguese-cased-legal-tsdae-mkd-nli-sts-v0 (Legal BERTimbau)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
stjiris/bert-large-portuguese-cased-legal-tsdae-mkd-nli-sts-v0 derives from stjiris/bert-large-portuguese-cased-legal-mlm (legal variant of [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large).
It was trained using the TSDAE technique with a learning rate of 1e-5 on [Legal Sentences from +-30000 documents](https://huggingface.co/datasets/stjiris/portuguese-legal-sentences-v1.0) for 21.2k training steps (the best performance for our semantic search system implementation).
This model was then subjected to the Multilingual Knowledge Distillation (MKD) technique. For that process, the teacher model was 'sentence-transformers/stsb-roberta-large', the supported source language was English, and the language to learn was Portuguese.
The dataset used was the TED 2020 Parallel Sentences Corpus. TED 2020 contains around 4000 TED and TED-X transcripts from July 2020, translated by volunteers into more than 100 languages, adding up to a total of 10,544,174 sentences.
The model was then presented with NLI data (batch size 16, learning rate 2e-5).
It was trained for Semantic Textual Similarity, being submitted to a fine-tuning stage with the [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2) and [stsb_multi_mt pt](https://huggingface.co/datasets/stsb_multi_mt) datasets (learning rate 1e-5).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-tsdae-mkd-nli-sts-v0')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('stjiris/bert-large-portuguese-cased-legal-tsdae-mkd-nli-sts-v0')
model = AutoModel.from_pretrained('stjiris/bert-large-portuguese-cased-legal-tsdae-mkd-nli-sts-v0')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1028, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
If you use this work, please cite:
```bibtex
@inproceedings{MeloSemantic,
author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~ a}o},
title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a},
}
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@inproceedings{fonseca2016assin,
title={ASSIN: Avaliacao de similaridade semantica e inferencia textual},
author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
pages={13--15},
year={2016}
}
@inproceedings{real2020assin,
title={The assin 2 shared task: a quick overview},
author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={406--412},
year={2020},
organization={Springer}
}
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}
``` |
Arcanos/1 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.84 +/- 16.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
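A minimal sketch, assuming the checkpoint was uploaded as a standard SB3 zip (repo id and filename are placeholders):
```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename: point these at the actual upload.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Re-evaluate the agent locally.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```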
|
Arcktosh/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -154.71 +/- 21.18
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
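A minimal loading-and-rollout sketch, assuming a standard SB3 zip upload (repo id and filename are placeholders):
```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Placeholder repo id / filename: point these at the actual upload.
checkpoint = load_from_hub(repo_id="<user>/dqn-LunarLander-v2",
                           filename="dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)

# Run one episode with the greedy (deterministic) policy.
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```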
|
AriakimTaiyo/DialoGPT-cultured-Kumiko | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -221.35 +/- 146.02
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
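Given the low reported mean reward, one option is to load the checkpoint and keep training. The sketch below assumes a standard SB3 zip upload; the repo id and filename are placeholders.
```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Placeholder repo id / filename: point these at the actual upload.
checkpoint = load_from_hub(repo_id="<user>/dqn-LunarLander-v2",
                           filename="dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)

# Attach a fresh environment and continue training from the saved weights.
model.set_env(gym.make("LunarLander-v2"))
model.learn(total_timesteps=100_000)
model.save("dqn-LunarLander-v2-continued")
```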
|
ArthurcJP/DialoGPT-small-YODA | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-20T01:04:09Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 855.88 +/- 62.66
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
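## Usage (with Stable-baselines3)
A minimal loading sketch; the repo id and filename are placeholders, and `pybullet_envs` must be installed for `AntBulletEnv-v0`.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename: point these at the actual upload.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```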
|
AshLukass/AshLukass | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: final_five_class_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_five_class_classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1000
- F1: 0.9566
- Roc Auc: 0.9664
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 250 | 0.2560 | 0.8761 | 0.9004 | 0.665 |
| 0.3126 | 2.0 | 500 | 0.2082 | 0.8966 | 0.9177 | 0.7025 |
| 0.3126 | 3.0 | 750 | 0.1879 | 0.9024 | 0.9254 | 0.705 |
| 0.2165 | 4.0 | 1000 | 0.1654 | 0.9166 | 0.9348 | 0.755 |
| 0.2165 | 5.0 | 1250 | 0.1403 | 0.9346 | 0.9500 | 0.7975 |
| 0.1619 | 6.0 | 1500 | 0.1288 | 0.9394 | 0.9523 | 0.815 |
| 0.1619 | 7.0 | 1750 | 0.1112 | 0.9515 | 0.9614 | 0.855 |
| 0.1161 | 8.0 | 2000 | 0.1112 | 0.9492 | 0.9585 | 0.8575 |
| 0.1161 | 9.0 | 2250 | 0.1029 | 0.9536 | 0.9631 | 0.8725 |
| 0.086 | 10.0 | 2500 | 0.1000 | 0.9566 | 0.9664 | 0.875 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1+cpu
- Datasets 2.8.0
- Tokenizers 0.12.1
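## How to use
The F1 / ROC-AUC / accuracy metrics above suggest a multi-label setup, so the sketch below applies a sigmoid and a 0.5 threshold; this is an assumption, and the repo id is a placeholder.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id: replace with the actual Hub path of this fine-tuned checkpoint.
repo = "<user>/final_five_class_classification"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]             # assumed multi-label head
predicted = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()
print(probs.tolist(), predicted)
```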
|
Aybars/XLM_Turkish | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-20T01:38:42Z | ---
language:
- he
---
## Description
An experimental model for Hebrew with pruned embeddings of the mT5-base model |
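## Usage
A minimal loading sketch; the repo id is a placeholder, and because the embeddings are pruned, the tokenizer shipped with this repository should be used rather than the one from `google/mt5-base`.
```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

# Placeholder repo id: replace with the actual Hub path of this checkpoint.
repo = "<user>/mt5-base-hebrew-pruned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = MT5ForConditionalGeneration.from_pretrained(repo)

inputs = tokenizer("שלום עולם", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```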
Ayham/robertagpt2_xsum4 | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: tuned_cair_five_classes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tuned_cair_five_classes
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0976
- F1: 0.9767
- Roc Auc: 0.9815
- Accuracy: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 250 | 0.1394 | 0.9480 | 0.9577 | 0.845 |
| 0.1037 | 2.0 | 500 | 0.1221 | 0.95 | 0.9577 | 0.87 |
| 0.1037 | 3.0 | 750 | 0.1107 | 0.9598 | 0.9680 | 0.8875 |
| 0.0593 | 4.0 | 1000 | 0.0965 | 0.9662 | 0.9728 | 0.905 |
| 0.0593 | 5.0 | 1250 | 0.0872 | 0.9734 | 0.9787 | 0.92 |
| 0.0352 | 6.0 | 1500 | 0.0824 | 0.9753 | 0.9802 | 0.925 |
| 0.0352 | 7.0 | 1750 | 0.0906 | 0.9701 | 0.9759 | 0.915 |
| 0.02 | 8.0 | 2000 | 0.0900 | 0.9734 | 0.9787 | 0.925 |
| 0.02 | 9.0 | 2250 | 0.0930 | 0.9727 | 0.9776 | 0.9225 |
| 0.0141 | 10.0 | 2500 | 0.0932 | 0.9734 | 0.9787 | 0.9175 |
| 0.0141 | 11.0 | 2750 | 0.0925 | 0.9760 | 0.9808 | 0.9275 |
| 0.0098 | 12.0 | 3000 | 0.0964 | 0.9741 | 0.9798 | 0.93 |
| 0.0098 | 13.0 | 3250 | 0.0964 | 0.9747 | 0.9798 | 0.9275 |
| 0.0069 | 14.0 | 3500 | 0.0981 | 0.9734 | 0.9787 | 0.925 |
| 0.0069 | 15.0 | 3750 | 0.0930 | 0.9767 | 0.9815 | 0.9325 |
| 0.0058 | 16.0 | 4000 | 0.0939 | 0.9767 | 0.9815 | 0.9325 |
| 0.0058 | 17.0 | 4250 | 0.0959 | 0.9767 | 0.9815 | 0.935 |
| 0.0048 | 18.0 | 4500 | 0.0972 | 0.9753 | 0.9799 | 0.925 |
| 0.0048 | 19.0 | 4750 | 0.0971 | 0.9767 | 0.9815 | 0.9325 |
| 0.0042 | 20.0 | 5000 | 0.0976 | 0.9767 | 0.9815 | 0.9325 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Ayham/xlnet_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
language:
- he
---
An experimental model for Hebrew with pruned embeddings of the mT5-large model |
Ayran/DialoGPT-small-harry-potter-1-through-3 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2023-01-20T02:22:54Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.20 +/- 37.94
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AyushPJ/ai-club-inductions-21-nlp-roBERTa | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -68.19 +/- 22.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Azaghast/GPT2-SCP-Descriptions | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: a photo of fluffalpaca llama in front of the Colosseum in Rome
---
# DreamBooth model for the fluffalpaca concept trained on the CCMat/db-aplaca dataset.
This is a Stable Diffusion model fine-tuned on the fluffalpaca concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of fluffalpaca llama**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `llama` images for the animal theme.<br>
### Training Hyperparemeters
Pretrained Model: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2)<br>
Learning rate: 1e-6<br>
Steps:1078<br>
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('CCMat/fluffalpaca-llama-1078')
image = pipeline("a photo of fluffalpaca llama in front of the Colosseum in Rome").images[0]
image
```
|