modelId | tags | pipeline_tag | config | downloads | first_commit | card
---|---|---|---|---|---|---
Davlan/mbart50-large-eng-yor-mt | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.91 +/- 19.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, not this model's actual files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholders -- substitute this model's actual Hub repo id and checkpoint name.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Davlan/mt5-small-en-pcm | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # gym (or gymnasium) provides the Taxi-v3 environment

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebooks that this card assumes are in scope.
model = load_from_hub(repo_id="toinsson/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Davlan/mt5-small-pcm-en | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-12-21T12:11:17Z | ---
license: apache-2.0
tags:
- google/fleurs
- generated_from_trainer
- automatic-speech-recognition
- hf-asr-leaderboard
- pashto
- ps
datasets:
- fleurs
metrics:
- wer
model-index:
- name: facebook/wav2vec2-xls-r-300m
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs
type: google/fleurs
args: 'config: ps_af, split: test'
metrics:
- name: Wer
type: wer
value: 0.5159447476125512
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook/wav2vec2-xls-r-300m
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/FLEURS - PS_AF dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9162
- Wer: 0.5159
- Cer: 0.1972
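For reference, WER is a word-level edit-distance rate; below is a minimal sketch of how such a score is computed with the `evaluate` library (toy strings, not this model's actual predictions):
```python
import evaluate

wer_metric = evaluate.load("wer")
# Toy example: one substitution out of three reference words -> WER = 1/3.
score = wer_metric.compute(predictions=["the cat sat"], references=["the cat sits"])
print(score)  # ~0.333
```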
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 6000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:------:|:---------------:|:------:|
| 5.0767 | 6.33 | 500 | 1.0 | 4.8783 | 1.0 |
| 3.1156 | 12.66 | 1000 | 1.0 | 3.0990 | 1.0 |
| 1.3506 | 18.99 | 1500 | 0.2889 | 1.1056 | 0.7031 |
| 0.9997 | 25.32 | 2000 | 0.2301 | 0.9191 | 0.5944 |
| 0.7838 | 31.65 | 2500 | 0.2152 | 0.8952 | 0.5556 |
| 0.6665 | 37.97 | 3000 | 0.2017 | 0.8908 | 0.5252 |
| 0.6265 | 44.3 | 3500 | 0.1954 | 0.9063 | 0.5133 |
| 0.5935 | 50.63 | 4000 | 0.1969 | 0.9162 | 0.5156 |
| 0.5174 | 56.96 | 4500 | 0.1972 | 0.9287 | 0.5140 |
| 0.5462 | 63.29 | 5000 | 0.1974 | 0.9370 | 0.5138 |
| 0.5564 | 69.62 | 5500 | 0.1977 | 0.9461 | 0.5148 |
| 0.5252 | 75.95 | 6000 | 0.1969 | 0.9505 | 0.5118 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Davlan/mt5_base_eng_yor_mt | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-profane-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-profane-final
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2773
- Accuracy: 0.8992
- Precision: 0.8261
- Recall: 0.7987
- F1: 0.8114
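For context on how the four figures above relate, here is a small illustration with made-up labels (not this model's evaluation data), using scikit-learn:
```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Made-up binary labels purely to illustrate the metric definitions.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(accuracy_score(y_true, y_pred), precision, recall, f1)  # 0.667 0.667 0.667 0.667
```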
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 296 | 0.2862 | 0.8907 | 0.8230 | 0.7528 | 0.7807 |
| 0.3379 | 2.0 | 592 | 0.2650 | 0.9097 | 0.8748 | 0.7778 | 0.8148 |
| 0.3379 | 3.0 | 888 | 0.2632 | 0.9049 | 0.8417 | 0.7999 | 0.8185 |
| 0.221 | 4.0 | 1184 | 0.2772 | 0.8916 | 0.8055 | 0.8055 | 0.8055 |
| 0.221 | 5.0 | 1480 | 0.2773 | 0.8992 | 0.8261 | 0.7987 | 0.8114 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Davlan/mt5_base_yor_eng_mt | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
pipeline_tag: text-to-image
---
# The 'Inizio' Models
<img src=https://ac-p.namu.la/20230106sac/980c715bd45a7fb0cfe8ce06715c6c4034825fa4d4d07267a5585eb709f2cefa.png
width=100% height=100%>
Inizio is a series of custom merged models. It is based on many released open-source models and is distributed in the .safetensors format only. (The merge operations used below are sketched after the model list.)
Compatible with the [WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui).
## Summary
This model repository currently includes 7 models:
1. *Inizio Fantasma*: Blossom mix+[Anything 3.0](https://huggingface.co/Linaqruf/anything-v3.0)+[SamDoesArtsUltMerge](https://civitai.com/models/68/samdoesarts-ultmerge); weighted, M=0.2. Quietly impressive semi-realistic model.
2. *Inizio Inseguitore*: [ALG](https://arca.live/b/aiart/64297100)+[SamDoesArtsUltMerge](https://civitai.com/models/68/samdoesarts-ultmerge)+Blossom Mix; add difference, M=0.3. Similar to Inizio Fantasma, but anime-style focused.
3. *Inizio Foschia*: [ALG](https://arca.live/b/aiart/64297100)+Inizio Fantasma+[A Certain Model](https://huggingface.co/JosephusCheung/ACertainModel); weighted, M=0.7. Similar to Inizio Inseguitore.
4. *Inizio Replicante*: Inizio Foschia+[DBmai](https://tieba.baidu.com/p/8136937175)+[Finale 5o](https://arca.live/b/aiart/65251337); weighted, M=0.5. Well-tuned Semi-Realistic Anime model. The most fashionable.
5. *Inizio Skinjob*: Inizio Replicante+[Berry Mix](https://rentry.co/LFTBL)+[ElyOrange](https://huggingface.co/WarriorMama777/OrangeMixs); weighted, M=0.6. Well-tuned Semi-Realistic Anime model.
6. *Inizio Deckard*: ([Kribo Nstal](https://civitai.com/models/1225/kribomix-nstal)+Inizio Skinjob; weighted, M=0.5)+Inizio Fantasma+Kribo Nstal; weighted, M=0.5.
7. *Inizio Unico*: Inizio Fantasma+Inizio Inseguitore+Inizio Foschia+Inizio Replicante+Inizio Skinjob+Inizio Deckard; weighted, M=1/6. The most advanced Inizio model.
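The two merge operations named above are, under the usual WebUI checkpoint-merger definitions (background math, not taken from this card):
```python
# A, B, C stand for the weight tensors of the models being merged.
def weighted_sum(a, b, m):
    # "weighted, M=m": linear blend of two models.
    return (1 - m) * a + m * b

def add_difference(a, b, c, m):
    # "add difference, M=m": add the (B - C) delta onto A.
    return a + m * (b - c)
```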
## Recommended Settings (Especially for Inizio Unico)
- Variational Autoencoder (VAE): [SD MSE 840k.vae.pt](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt)
- Embedding: [bad_prompt_ver2](https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/main/bad_prompt_version2.pt)
- Clip skip: 3
- Prompt: object-related tag first, quality-related tag later; [prompt list](https://docs.google.com/document/d/1X26U00Pxsqmyi47RjxDBMDLCcmyN23z-NEDBgd-vW-c/edit?usp=sharing)
- Resolution: 1024x576->1366x768 w/ HighRes. Fix
- HighRes. Fix: Latent or ESRGAN; upscale by 2
## Sample Image
> <img src=https://ac2-p.namu.la/20230106sac2/39fd2e16ccf9b0f0254fb97993a575783c8f58a0031c86c54e55ba116e9e21fb.png
> width=100% height=100%>
> ▲ X / Y Plot #1
>
> <img src=https://ac-p.namu.la/20230106sac/5ded2216805dcbadd5ad581497c881582e8234a2f0ca3200163e89d9d86d8443.png
> width=100% height=100%>
> ▲ X / Y Plot #2
>
> <img src=https://ac-p.namu.la/20230106sac/d9ac67ac785d8e70c759eac374a9328a5548a1f4f9ed1a4a15dd842eb0ae6f20.png
> width=100% height=100%>
> ▲ X / Y Plot #3
>
> <img src=https://ac-p.namu.la/20230106sac/fa3ff5f04a30c8ef68d4e937c1511d6806d3d6b0bb27e6bcc4dba056ad5d6b76.png
> width=30% height=30%>
> ▲ Inizio Skinjob
>
> <img src=https://ac-p.namu.la/20230106sac/17be3a43666816c43776455f31a04f47b01817632b53fb67341444d11318dfd3.png
> width=30% height=30%>
> <img src=https://ac-p.namu.la/20230108sac/ab8169d92f6c9a3bcc47a4cb3138c92bd41bfb794e0020ec3e601834ebd30252.png
> width=30% height=30%>
> <img src=https://ac-p.namu.la/20230108sac/2dbb20abadbed8734fb7b4a8fb1c77485e25de9bfdb671bb91e15d9853cbf972.png
> width=30% height=30%>
> <img src=https://ac-p.namu.la/20230108sac/269e9b1e7f0c9eb2db7c3b5100c1cce6fc75df5fe8c2f02499db55e4b18943f4.jpg
> width=100% height=100%>
> ▲ Inizio Unico
>
## License Information
This model follows Creative ML Open RAIL-M: [Stable Diffusion License](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
That said, you may use it however you want; I don't like to impose such restrictions.
## Contact
*[email protected]* or [*Find Cinnamomo on AI Art Channel*](https://arca.live/b/aiart) |
Davlan/xlm-roberta-base-finetuned-english | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 43680 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 8736,
"weight_decay": 0.01
}
```
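Put together, a training run with these parameters looks roughly like the sketch below (assumptions: the base checkpoint and the single training pair are placeholders, not the actual training setup):
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("<base-checkpoint>")  # placeholder base model
train_examples = [InputExample(texts=["an anchor sentence", "a matching sentence"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=8736,
    weight_decay=0.01,
)
```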
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Davlan/xlm-roberta-base-finetuned-hausa | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 234 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-Basic
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.20 +/- 20.08
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Davlan/xlm-roberta-base-finetuned-kinyarwanda | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 61 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.82 +/- 13.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Loading this model from the Hub:
```python
import gym

from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
repo_id = "raghuvamsidhar/ppo-LunarLander-v2" # The repo_id
filename = "PPO-LunarLander-v2-RVD.zip" # The model filename.zip
# A model trained on Python 3.8 is pickled with protocol 5,
# but Python 3.6 and 3.7 only support protocol 4.
# To stay compatible we need to:
# 1. Install pickle5 (done at the beginning of the colab)
# 2. Create a custom empty object that we pass as a parameter to PPO.load()
custom_objects = {
"learning_rate": 0.0,
"lr_schedule": lambda _: 0.0,
"clip_range": lambda _: 0.0,
}
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
...
```
|
Davlan/xlm-roberta-base-finetuned-lingala | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-12-21T12:51:33Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
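A minimal transcription sketch (not part of the original card; "<repo_id>" is a placeholder for this model's Hub id):
```python
from transformers import pipeline

# "<repo_id>" is a placeholder -- substitute this model's actual Hub id.
asr = pipeline("automatic-speech-recognition", model="<repo_id>")
print(asr("audio_sample.wav")["text"])
```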
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 450
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/xlm-roberta-base-finetuned-shona | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- or
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Odia
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 or
type: mozilla-foundation/common_voice_11_0
config: or
split: test
args: or
metrics:
- name: Wer
type: wer
value: 26.600846262341328
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Odia
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 or dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4786
- Wer: 26.6008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0001 | 24.01 | 250 | 0.4786 | 26.6008 |
| 0.0 | 49.01 | 500 | 0.5252 | 26.9394 |
| 0.0 | 74.01 | 750 | 0.5534 | 27.1368 |
| 0.0 | 99.01 | 1000 | 0.5644 | 26.9958 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Davlan/xlm-roberta-base-finetuned-somali | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: creativeml-openrail-m
language:
- en
thumbnail: "https://static.miraheze.org/intercriaturaswiki/2/2c/Dussian_model.png"
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
inference: true
widget:
- text: alien insects, insectoid, scarab, alien, humanoid, insects, with tails and wings, very strange creatures
- text: A dussian, insect
--- |
Davlan/xlm-roberta-base-finetuned-wolof | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-12-21T13:18:03Z | ---
model-index:
- name: Sociovestix/lenu_FI
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: lenu
type: Sociovestix/lenu
config: FI
split: test
revision: fbe0b4b5b8d6950c10f5710f2c987728635a4afe
metrics:
- type: f1
value: 0.9853882549401615
name: f1
- type: f1
value: 0.7040205972185964
name: f1 macro
args:
average: macro
widget:
- text: "BetterMe Finland Oy"
- text: "Kotkan Kuurojen Yhdistys ry"
- text: "Kunnallisalan kehittämissäätiö sr"
- text: "Eero Kurvinen Ky"
- text: "OP-Tavoite 2 -sijoitusrahasto"
- text: "Asunto Oy Mertamäki"
- text: "Uponor Oyj"
- text: "Keskinäinen Kiinteistö Oy Keijo Lounala"
- text: "Kuusamon kaupunki"
- text: "Ilmajoen Musiikkijuhlat ry"
- text: "Hanne ja Timo Similä avoin yhtiö"
- text: "Vehmersalmen Osuuspankki"
- text: "Juvan seurakunta"
- text: "Nesteen Henkilöstörahasto hr"
- text: "Suur-Seudun Osuuskauppa SSO"
- text: "Tmi Ida Lehmus"
- text: "Pohjois-Karjalan sosiaali- ja terveyspalvelujen kuntayhtymä"
- text: "Finlaysonin Forssan Tehtaitten Sairauskassa"
- text: "Suomen Punainen Risti Savo-Karjalan piiri"
- text: "L-Fashion Group Oy:n Eläkesäätiö"
- text: "Gummandouran Osakaskunta"
- text: "Suomen Keskinäinen Lääkevahinkovakuutusyhtiö"
- text: "Työttömyyskassojen Tukikassa"
- text: "Kuntien Takauskeskus"
- text: "LUT-yliopiston ylioppilaskunta"
- text: "Huoltovarmuuskeskus"
- text: "Pohjois-Karjalan kauppakamari ry"
- text: "Turun katolinen seurakunta (Pyhän Birgitan ja Autuaan Hemminginseurakunta)"
- text: "Närpes Sjukvårdsfond"
- text: "Kvevlax Sparbank"
- text: "Valion Eläkekassa"
- text: "Yhteismetsä Visa"
- text: "Maatalousyhtymä Niinivehmas & Tallila"
- text: "Suomen Vahinkovakuutus Oy"
- text: "Pyhän Marian katolinen seurakunta"
- text: "Kumpulainen Elma Sofia kuolinpesä"
- text: "Afarak Group SE"
- text: "Nordiska Miljöfinansieringsbolaget, Pohjoismaiden ympäristörahoitusyhtiö NEFCO, Nordic Environment Finance Corporation"
- text: "Österbottens Fiskeriförsäkringsförening"
- text: "Joensuun ortodoksinen seurakunta"
- text: "Scandia Mink Ab konkursbo"
---
# LENU - Legal Entity Name Understanding for Finland
A [Finnish BERT](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model fine-tuned on Finnish legal entity names (jurisdiction FI) from the Global [Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei)
(LEI) System, with the goal of detecting [Entity Legal Form (ELF) Codes](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list).
---------------
<h1 align="center">
<a href="https://gleif.org">
<img src="http://sdglabs.ai/wp-content/uploads/2022/07/gleif-logo-new.png" width="220px" style="display: inherit">
</a>
</h1><br>
<h3 align="center">in collaboration with</h3>
<h1 align="center">
<a href="https://sociovestix.com">
<img src="https://sociovestix.com/img/svl_logo_centered.svg" width="700px" style="width: 100%">
</a>
</h1><br>
---------------
## Model Description
<!-- Provide a longer summary of what this model is. -->
The model has been created as part of a collaboration between the [Global Legal Entity Identifier Foundation](https://gleif.org) (GLEIF) and
[Sociovestix Labs](https://sociovestix.com) with the goal of exploring how machine learning can support the detection of the ELF Code solely based on an entity's legal name and legal jurisdiction.
See also the open-source Python library [lenu](https://github.com/Sociovestix/lenu), which supports this task.
The model has been trained on the dataset [lenu](https://huggingface.co/datasets/Sociovestix), with a focus on Finnish legal entities and ELF Codes within the jurisdiction "FI".
- **Developed by:** [GLEIF](https://gleif.org) and [Sociovestix Labs](https://huggingface.co/Sociovestix)
- **License:** Creative Commons (CC0) license
- **Finetuned from model:** TurkuNLP/bert-base-finnish-cased-v1
- **Resources for more information:** [Press Release](https://www.gleif.org/en/newsroom/press-releases/machine-learning-new-open-source-tool-developed-by-gleif-and-sociovestix-labs-enables-organizations-everywhere-to-automatically-)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
An entity's legal form is a crucial component when verifying and screening organizational identity.
The wide variety of entity legal forms that exist within and between jurisdictions, however, has made it difficult for large organizations to capture legal form as structured data.
The jurisdiction-specific models of [lenu](https://github.com/Sociovestix/lenu), trained on entities from
GLEIF’s Legal Entity Identifier (LEI) database of over two million records, will allow banks,
investment firms, corporations, governments, and other large organizations to retrospectively analyze
their master data, extract the legal form from the unstructured text of the legal name and
uniformly apply an ELF code to each entity type, according to the ISO 20275 standard.
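As a sketch, querying the model directly with `transformers` (bypassing the higher-level `lenu` interface) looks like this; the returned label is an ELF code:
```python
from transformers import pipeline

# "Uponor Oyj" is one of the widget examples above.
elf_classifier = pipeline("text-classification", model="Sociovestix/lenu_FI")
print(elf_classifier("Uponor Oyj"))  # e.g. [{'label': '<ELF code>', 'score': ...}]
```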
# Licensing Information
This model, which is trained on LEI data, is available under the Creative Commons (CC0) license.
See [gleif.org/en/about/open-data](https://gleif.org/en/about/open-data).
# Recommendations
Users should always consider the score of the suggested ELF Codes. For low score values it may be necessary to manually review the affected entities. |
Davlan/xlm-roberta-base-finetuned-xhosa | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language:
- ru
tags:
- Transformers
- bert
pipeline_tag: fill-mask
thumbnail: https://github.com/sberbank-ai/model-zoo
license: apache-2.0
widget:
- text: Применение метода <mask> векторов для решения задач классификации
---
# ruSciBERT
The model was trained by the Sber AI team and the MLSA Lab of the Institute for AI, MSU.
If you use our model for your project, please tell us about it ([[email protected]]([email protected])).
[Presentation at the AI Journey 2022](https://ai-journey.ru/archive/?year=2022&video=https://vk.com/video_ext.php?oid=-22522055&id=456242496&hash=ae9efe06acf647fd)
* Task: `mask filling`
* Type: `encoder`
* Tokenizer: `bpe`
* Dict size: `50265`
* Num Parameters: `123 M`
* Training Data Volume: `6.5 GB`
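A minimal mask-filling sketch (not part of the original card; "<repo_id>" is a placeholder for this model's Hub id), using the widget prompt above:
```python
from transformers import pipeline

# "<repo_id>" is a placeholder -- substitute this model's actual Hub id.
fill = pipeline("fill-mask", model="<repo_id>")
# The prompt translates to: "Applying the <mask> vector method to classification problems".
print(fill("Применение метода <mask> векторов для решения задач классификации"))
```
|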
Davlan/xlm-roberta-base-finetuned-yoruba | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- wer
model-index:
- name: model_syllable_onSet2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_syllable_onSet2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4231
- 0 Precision: 1.0
- 0 Recall: 0.96
- 0 F1-score: 0.9796
- 0 Support: 25
- 1 Precision: 0.9643
- 1 Recall: 0.9643
- 1 F1-score: 0.9643
- 1 Support: 28
- 2 Precision: 1.0
- 2 Recall: 0.9643
- 2 F1-score: 0.9818
- 2 Support: 28
- 3 Precision: 0.8889
- 3 Recall: 1.0
- 3 F1-score: 0.9412
- 3 Support: 16
- Accuracy: 0.9691
- Macro avg Precision: 0.9633
- Macro avg Recall: 0.9721
- Macro avg F1-score: 0.9667
- Macro avg Support: 97
- Weighted avg Precision: 0.9714
- Weighted avg Recall: 0.9691
- Weighted avg F1-score: 0.9695
- Weighted avg Support: 97
- Wer: 0.2827
- Mtrix: [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:--------------------------------------------------------------------------------------:|
| 1.3102 | 4.16 | 100 | 1.2133 | 0.125 | 0.04 | 0.0606 | 25 | 0.0 | 0.0 | 0.0 | 28 | 0.3146 | 1.0 | 0.4786 | 28 | 0.0 | 0.0 | 0.0 | 16 | 0.2990 | 0.1099 | 0.26 | 0.1348 | 97 | 0.1230 | 0.2990 | 0.1538 | 97 | 0.9676 | [[0, 1, 2, 3], [0, 1, 0, 24, 0], [1, 7, 0, 21, 0], [2, 0, 0, 28, 0], [3, 0, 0, 16, 0]] |
| 0.7368 | 8.33 | 200 | 0.7100 | 1.0 | 0.72 | 0.8372 | 25 | 0.3333 | 0.0357 | 0.0645 | 28 | 0.3684 | 1.0 | 0.5385 | 28 | 0.0 | 0.0 | 0.0 | 16 | 0.4845 | 0.4254 | 0.4389 | 0.3600 | 97 | 0.4603 | 0.4845 | 0.3898 | 97 | 0.8227 | [[0, 1, 2, 3], [0, 18, 2, 5, 0], [1, 0, 1, 27, 0], [2, 0, 0, 28, 0], [3, 0, 0, 16, 0]] |
| 0.3813 | 12.49 | 300 | 0.3802 | 0.8519 | 0.92 | 0.8846 | 25 | 0.7333 | 0.7857 | 0.7586 | 28 | 0.9231 | 0.8571 | 0.8889 | 28 | 0.9286 | 0.8125 | 0.8667 | 16 | 0.8454 | 0.8592 | 0.8438 | 0.8497 | 97 | 0.8509 | 0.8454 | 0.8465 | 97 | 0.7694 | [[0, 1, 2, 3], [0, 23, 2, 0, 0], [1, 4, 22, 2, 0], [2, 0, 3, 24, 1], [3, 0, 3, 0, 13]] |
| 0.2761 | 16.65 | 400 | 0.2263 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 0.9643 | 0.9818 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9794 | 0.9722 | 0.9821 | 0.9762 | 97 | 0.9817 | 0.9794 | 0.9798 | 97 | 0.4392 | [[0, 1, 2, 3], [0, 25, 0, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.1596 | 20.82 | 500 | 0.2283 | 1.0 | 0.96 | 0.9796 | 25 | 0.9310 | 0.9643 | 0.9474 | 28 | 0.9643 | 0.9643 | 0.9643 | 28 | 0.9375 | 0.9375 | 0.9375 | 16 | 0.9588 | 0.9582 | 0.9565 | 0.9572 | 97 | 0.9595 | 0.9588 | 0.9589 | 97 | 0.4971 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 1, 0], [2, 0, 0, 27, 1], [3, 0, 1, 0, 15]] |
| 0.124 | 24.98 | 600 | 0.1841 | 1.0 | 0.96 | 0.9796 | 25 | 0.9655 | 1.0 | 0.9825 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.9412 | 1.0 | 0.9697 | 16 | 0.9794 | 0.9767 | 0.9811 | 0.9784 | 97 | 0.9803 | 0.9794 | 0.9794 | 97 | 0.2955 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 28, 0, 0], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.1162 | 29.16 | 700 | 0.2286 | 1.0 | 0.96 | 0.9796 | 25 | 0.9333 | 1.0 | 0.9655 | 28 | 1.0 | 0.9286 | 0.9630 | 28 | 0.9412 | 1.0 | 0.9697 | 16 | 0.9691 | 0.9686 | 0.9721 | 0.9694 | 97 | 0.9711 | 0.9691 | 0.9691 | 97 | 0.3627 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 28, 0, 0], [2, 0, 1, 26, 1], [3, 0, 0, 0, 16]] |
| 0.1576 | 33.33 | 800 | 0.2259 | 1.0 | 0.92 | 0.9583 | 25 | 0.9333 | 1.0 | 0.9655 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.9412 | 1.0 | 0.9697 | 16 | 0.9691 | 0.9686 | 0.9711 | 0.9688 | 97 | 0.9711 | 0.9691 | 0.9691 | 97 | 0.3210 | [[0, 1, 2, 3], [0, 23, 2, 0, 0], [1, 0, 28, 0, 0], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0957 | 37.49 | 900 | 0.2757 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 0.9643 | 0.9643 | 0.9643 | 28 | 0.9412 | 1.0 | 0.9697 | 16 | 0.9691 | 0.9674 | 0.9721 | 0.9695 | 97 | 0.9697 | 0.9691 | 0.9691 | 97 | 0.3499 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 1, 0], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.1145 | 41.65 | 1000 | 0.2951 | 1.0 | 0.96 | 0.9796 | 25 | 1.0 | 0.9643 | 0.9818 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8421 | 1.0 | 0.9143 | 16 | 0.9691 | 0.9605 | 0.9721 | 0.9644 | 97 | 0.9740 | 0.9691 | 0.9701 | 97 | 0.3024 | [[0, 1, 2, 3], [0, 24, 0, 0, 1], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.121 | 45.82 | 1100 | 0.3262 | 1.0 | 0.96 | 0.9796 | 25 | 1.0 | 0.9643 | 0.9818 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8421 | 1.0 | 0.9143 | 16 | 0.9691 | 0.9605 | 0.9721 | 0.9644 | 97 | 0.9740 | 0.9691 | 0.9701 | 97 | 0.2885 | [[0, 1, 2, 3], [0, 24, 0, 0, 1], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.079 | 49.98 | 1200 | 0.3615 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9691 | 0.9633 | 0.9721 | 0.9667 | 97 | 0.9714 | 0.9691 | 0.9695 | 97 | 0.3615 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0733 | 54.16 | 1300 | 0.3891 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9691 | 0.9633 | 0.9721 | 0.9667 | 97 | 0.9714 | 0.9691 | 0.9695 | 97 | 0.3082 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0962 | 58.33 | 1400 | 0.3620 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9691 | 0.9633 | 0.9721 | 0.9667 | 97 | 0.9714 | 0.9691 | 0.9695 | 97 | 0.2851 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0628 | 62.49 | 1500 | 0.4084 | 1.0 | 0.96 | 0.9796 | 25 | 0.9630 | 0.9286 | 0.9455 | 28 | 0.9643 | 0.9643 | 0.9643 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9588 | 0.9540 | 0.9632 | 0.9576 | 97 | 0.9607 | 0.9588 | 0.9590 | 97 | 0.3001 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 26, 1, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0675 | 66.65 | 1600 | 0.4231 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9691 | 0.9633 | 0.9721 | 0.9667 | 97 | 0.9714 | 0.9691 | 0.9695 | 97 | 0.2827 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/xlm-roberta-base-finetuned-zulu | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Small Slovenian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs sl_si
type: google/fleurs
config: sl_si
split: test
args: sl_si
metrics:
- name: Wer
type: wer
value: 39.37632455343627
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Slovenian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs sl_si dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8831
- Wer: 39.3763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0054 | 22.01 | 250 | 0.8189 | 39.5761 |
| 0.0015 | 45.01 | 500 | 0.8831 | 39.3763 |
| 0.0009 | 68.0 | 750 | 0.9106 | 39.5035 |
| 0.0008 | 90.01 | 1000 | 0.9193 | 39.6549 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Davlan/xlm-roberta-base-masakhaner | [
"pytorch",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-targin-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-targin-final
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6307
- Accuracy: 0.6882
- Precision: 0.6443
- Recall: 0.6384
- F1: 0.6409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 296 | 0.5882 | 0.6854 | 0.6355 | 0.6182 | 0.6226 |
| 0.5995 | 2.0 | 592 | 0.5693 | 0.7015 | 0.6590 | 0.6019 | 0.6030 |
| 0.5995 | 3.0 | 888 | 0.5823 | 0.6882 | 0.6440 | 0.6377 | 0.6403 |
| 0.5299 | 4.0 | 1184 | 0.5968 | 0.6949 | 0.6488 | 0.6340 | 0.6386 |
| 0.5299 | 5.0 | 1480 | 0.6236 | 0.6835 | 0.6430 | 0.6436 | 0.6433 |
| 0.4698 | 6.0 | 1776 | 0.6307 | 0.6882 | 0.6443 | 0.6384 | 0.6409 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Davlan/xlm-roberta-base-ner-hrl | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 760 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="emmashe15/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Davlan/xlm-roberta-base-sadilar-ner | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial there on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: 1itai1/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Davlan/xlm-roberta-base-wikiann-ner | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 235 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: r123oy
---
### Roy Dreambooth model trained by duja1 with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) using the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
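A minimal `diffusers` sketch of that (assumption: "<repo_id>" is a placeholder for this model's actual Hub id):
```python
import torch
from diffusers import StableDiffusionPipeline

# "<repo_id>" is a placeholder -- substitute this Dreambooth model's Hub id.
pipe = StableDiffusionPipeline.from_pretrained("<repo_id>", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait photo of r123oy").images[0]  # use the concept token in the prompt
image.save("r123oy.png")
```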
Sample pictures of:
r123oy (use that in your prompt)

|
Dazai/Ok | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
## Hi! This is a model based on ruGPT3-small
The model is released under the MIT license!
# Python example code
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name_or_path = "path/to/the/model/folder"
tokenizer = GPT2Tokenizer.from_pretrained(model_name_or_path)
model = GPT2LMHeadModel.from_pretrained(model_name_or_path).cuda()

text = "Привет! Как дела?"  # any prompt you want the model to continue
input_ids = tokenizer.encode(text, return_tensors="pt").cuda()
out = model.generate(input_ids, repetition_penalty=5.0, do_sample=True, top_k=5, top_p=0.95, temperature=1.0)
generated_text = list(map(tokenizer.decode, out))
print(generated_text[0])
```
# Funny texts
There are none yet! But if you want to add a funny case, open a pull request!
|
Ddarkros/Test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
widget:
- text: "masterpiece, best quality, anime, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, garden, looking at viewer"
example_title: "anime 1girl"
- text: "masterpiece, best quality, anime, 1boy, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, garden, looking at viewer"
example_title: "anime 1boy"
---
# This is only a test model and is not recommended; please don't use it directly!
Fine-tuned from Stable Diffusion [v2-1_768-nonema-pruned.ckpt](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-nonema-pruned.ckpt).
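If you still want to sample from it for testing purposes, a minimal `diffusers` sketch might look like this (the repository id is a placeholder; the prompt is taken from the widget above, and 768x768 matches the v2-1 768 base):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "<user>/<this-test-model>",  # placeholder -- substitute the real repo id
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("masterpiece, best quality, anime, 1girl, brown hair, green eyes, "
          "colorful, autumn, cumulonimbus clouds, lighting, blue sky, garden, "
          "looking at viewer")
image = pipe(prompt, height=768, width=768).images[0]
image.save("test.png")
```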
 |
DeBERTa/deberta-v2-xxlarge | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- el
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
- whisper-large
- mozilla-foundation/common_voice_11_0
- greek
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: whisper-lg-el-intlv-xs-2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 el
type: mozilla-foundation/common_voice_11_0
config: el
split: test
metrics:
- name: Wer
type: wer
value: 9.50037147102526
---
# whisper-lg-el-intlv-xs-2
This model is a fine-tuned version of [farsipal/whisper-lg-el-intlv-xs](https://huggingface.co/farsipal/whisper-lg-el-intlv-xs) on the interleaved mozilla-foundation/common_voice_11_0 (el) and google/fleurs (el_gr) datasets.
It achieves the following results on the evaluation set:
- Loss: 0.2872
- Wer: 9.5004
## Model description
The model was trained on two interleaved datasets for transcription in the Greek language.
## Intended uses & limitations
Transcription in the Greek language
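For example, a minimal transcription sketch with the `transformers` pipeline (the checkpoint id assumes this model is published under the same namespace as its base model; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="farsipal/whisper-lg-el-intlv-xs-2",  # assumed repo id
    chunk_length_s=30,
)
# force Greek transcription
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="greek", task="transcribe"
)
print(asr("recording.wav")["text"])
```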
## Training and evaluation data
Training was performed on the two interleaved datasets. Testing was performed on the Common Voice 11.0 (el) test split only.
## Training procedure
```
--model_name_or_path 'farsipal/whisper-lg-el-intlv-xs' \
--model_revision main \
--do_train True \
--do_eval True \
--use_auth_token False \
--freeze_feature_encoder False \
--freeze_encoder False \
--model_index_name 'whisper-lg-el-intlv-xs-2' \
--dataset_name 'mozilla-foundation/common_voice_11_0,google/fleurs' \
--dataset_config_name 'el,el_gr' \
--train_split_name 'train+validation,train+validation' \
--eval_split_name 'test,-' \
--text_column_name 'sentence,transcription' \
--audio_column_name 'audio,audio' \
--streaming False \
--max_duration_in_seconds 30 \
--do_lower_case False \
--do_remove_punctuation False \
--do_normalize_eval True \
--language greek \
--task transcribe \
--shuffle_buffer_size 500 \
--output_dir './data/finetuningRuns/whisper-lg-el-intlv-xs-2' \
--overwrite_output_dir True \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 4 \
--learning_rate 3.5e-6 \
--dropout 0.15 \
--attention_dropout 0.05 \
--warmup_steps 500 \
--max_steps 5000 \
--eval_steps 1000 \
--gradient_checkpointing True \
--cache_dir '~/.cache' \
--fp16 True \
--evaluation_strategy steps \
--per_device_eval_batch_size 8 \
--predict_with_generate True \
--generation_max_length 225 \
--save_steps 1000 \
--logging_steps 25 \
--report_to tensorboard \
--load_best_model_at_end True \
--metric_for_best_model wer \
--greater_is_better False \
--push_to_hub False \
--dataloader_num_workers 6
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0813 | 2.49 | 1000 | 0.2147 | 10.8284 |
| 0.0379 | 4.98 | 2000 | 0.2439 | 10.0111 |
| 0.0195 | 7.46 | 3000 | 0.2767 | 9.8811 |
| 0.0126 | 9.95 | 4000 | 0.2872 | 9.5004 |
| 0.0103 | 12.44 | 5000 | 0.3021 | 9.6954 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
DeadBeast/korscm-mBERT | [
"pytorch",
"bert",
"text-classification",
"korean",
"dataset:Korean-Sarcasm",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 43 | null | ---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Small Maori
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs mi_nz
type: google/fleurs
config: mi_nz
split: test
args: mi_nz
metrics:
- name: Wer
type: wer
value: 30.481593707691317
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Maori
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs mi_nz dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7756
- Wer: 30.4816
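As a sketch, transcription on a FLEURS test sample might look like this (the repository id is a placeholder, since the card does not state it):
```python
from datasets import Audio, load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

ckpt = "<user>/whisper-small-maori"  # placeholder -- substitute the real repo id
processor = WhisperProcessor.from_pretrained(ckpt)
model = WhisperForConditionalGeneration.from_pretrained(ckpt)

# one sample from the same split the model was evaluated on
ds = load_dataset("google/fleurs", "mi_nz", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
audio = ds[0]["audio"]

inputs = processor(audio["array"], sampling_rate=16_000, return_tensors="pt")
predicted_ids = model.generate(
    inputs.input_features,
    forced_decoder_ids=processor.get_decoder_prompt_ids(language="maori", task="transcribe"),
)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```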
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2693 | 7.02 | 100 | 0.6741 | 35.4845 |
| 0.0084 | 15.01 | 200 | 0.7756 | 30.4816 |
| 0.0029 | 23.0 | 300 | 0.8154 | 31.4744 |
| 0.002 | 30.02 | 400 | 0.8320 | 31.3777 |
| 0.0017 | 38.01 | 500 | 0.8372 | 31.5163 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
DeadBeast/mbert-base-cased-finetuned-bengali-fakenews | [
"pytorch",
"bert",
"text-classification",
"bengali",
"dataset:BanFakeNews",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1327
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7576
- Rouge1: 0.1327
- Rouge2: 0.0444
- Rougel: 0.1111
- Rougelsum: 0.1111
- Gen Len: 19.0
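As a usage sketch (the repository id is a placeholder, and the `summarize:` prefix follows the usual t5 convention):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="<user>/my_awesome_billsum_model")  # placeholder id
bill = ("The people of the State of California do enact as follows: "
        "Section 1. This act shall be known, and may be cited, as the Example Act.")
print(summarizer("summarize: " + bill, max_length=60)[0]["summary_text"])
```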
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 3.0485 | 0.1269 | 0.0387 | 0.1064 | 0.1065 | 19.0 |
| No log | 2.0 | 124 | 2.8371 | 0.1322 | 0.0468 | 0.1114 | 0.1114 | 19.0 |
| No log | 3.0 | 186 | 2.7747 | 0.1335 | 0.0464 | 0.1124 | 0.1123 | 19.0 |
| No log | 4.0 | 248 | 2.7576 | 0.1327 | 0.0444 | 0.1111 | 0.1111 | 19.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
DeadBeast/roberta-base-pretrained-mr-2 | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: victormmp1/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DecafNosebleed/ScaraBot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-21T14:28:20Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 368.00 +/- 93.31
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga besa2001 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga besa2001 -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga besa2001
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Declan/Breitbart_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: wooihen/ppo-HuggyRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Declan/Breitbart_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- code
tags:
- code
- rust
- programming
---
# BLOOM (560M ckpt) fine-tuned on The Stack RUST code
- Latest ckpt: https://huggingface.co/mrm8488/bloom-560m-finetuned-the-stack-rust/tree/100k
## Model 🧠
[BigScience Large Open-science Open-access Multilingual Language Model](https://huggingface.co/bigscience/bloom-560m#model-details) 🌸
BLOOM is an autoregressive Large Language Model (LLM), trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. As such, it is able to output coherent text in 46 languages and 13 programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained for, by casting them as text generation tasks.
## Dataset 📚
The **Rust** 🦀 part of The [Stack](https://huggingface.co/datasets/bigcode/the-stack).
The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the BigCode Project, an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets.
## Example of usage 👩💻
```py
import torch
from transformers import BloomTokenizerFast, BloomForCausalLM
device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'mrm8488/bloom-560m-finetuned-the-stack-rust'
revision = '100k' # latest one at the moment
tokenizer = BloomTokenizerFast.from_pretrained(ckpt)
model = BloomForCausalLM.from_pretrained(ckpt, revision=revision).to(device)
def complete_code(text):
    # tokenize the prompt and move it to the model's device
    inputs = tokenizer(text, return_tensors='pt')
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    # greedily decode up to 2048 tokens (stops at EOS)
    output = model.generate(input_ids, attention_mask=attention_mask, max_length=2048, eos_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0], skip_special_tokens=False)
code_prompt = """
use fastly::{Error, Request, Response};
use serde_json::{json, Value};
#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    let mut response = req.send("origin_0")?;
"""
complete_code(code_prompt)
```
## Citation ✒️
```
@misc {manuel_romero_2022,
author = { {Manuel Romero} },
title = { bloom-560m-finetuned-the-stack-rust (Revision 5358462) },
year = 2022,
url = { https://huggingface.co/mrm8488/bloom-560m-finetuned-the-stack-rust },
doi = { 10.57967/hf/0236 },
publisher = { Hugging Face }
}
```
|
Declan/Breitbart_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/multiwoz21
---
# ddpt-policy-sgd
This is a DDPT model (https://aclanthology.org/2022.coling-1.21/) trained on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21)
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- seed: 1
- optimizer: Adam
- num_epochs: 40
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111 |
Declan/Breitbart_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Boiler/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Declan/Breitbart_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Boiler/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Declan/Breitbart_model_v7 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-12-21T14:43:54Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('AgentXXX/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
Declan/Breitbart_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- wer
model-index:
- name: model_syllable_onSet3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_syllable_onSet3
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1590
- 0 Precision: 0.9688
- 0 Recall: 1.0
- 0 F1-score: 0.9841
- 0 Support: 31
- 1 Precision: 1.0
- 1 Recall: 1.0
- 1 F1-score: 1.0
- 1 Support: 25
- 2 Precision: 1.0
- 2 Recall: 0.9474
- 2 F1-score: 0.9730
- 2 Support: 19
- 3 Precision: 0.9545
- 3 Recall: 0.9545
- 3 F1-score: 0.9545
- 3 Support: 22
- Accuracy: 0.9794
- Macro avg Precision: 0.9808
- Macro avg Recall: 0.9755
- Macro avg F1-score: 0.9779
- Macro avg Support: 97
- Weighted avg Precision: 0.9797
- Weighted avg Recall: 0.9794
- Weighted avg F1-score: 0.9793
- Weighted avg Support: 97
- Wer: 0.2202
- Mtrix: [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:--------------------------------------------------------------------------------------:|
| 1.642 | 4.16 | 100 | 1.5891 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] |
| 1.4791 | 8.33 | 200 | 1.3227 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] |
| 1.2376 | 12.49 | 300 | 1.0446 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] |
| 0.9622 | 16.65 | 400 | 0.8811 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] |
| 0.8614 | 20.82 | 500 | 0.8174 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] |
| 0.8344 | 24.98 | 600 | 0.7498 | 1.0 | 1.0 | 1.0 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 1.0 | 1.0 | 19 | 1.0 | 1.0 | 1.0 | 22 | 1.0 | 1.0 | 1.0 | 1.0 | 97 | 1.0 | 1.0 | 1.0 | 97 | 1.0 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 19, 0], [3, 0, 0, 0, 22]] |
| 0.8105 | 29.16 | 700 | 0.7907 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 0.96 | 0.9796 | 25 | 0.95 | 1.0 | 0.9744 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9794 | 0.9797 | 0.9786 | 0.9787 | 97 | 0.9802 | 0.9794 | 0.9794 | 97 | 1.0 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 1, 0], [2, 0, 0, 19, 0], [3, 1, 0, 0, 21]] |
| 0.6168 | 33.33 | 800 | 0.5496 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 0.96 | 0.9796 | 25 | 0.95 | 1.0 | 0.9744 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9794 | 0.9797 | 0.9786 | 0.9787 | 97 | 0.9802 | 0.9794 | 0.9794 | 97 | 0.5840 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 1, 0], [2, 0, 0, 19, 0], [3, 1, 0, 0, 21]] |
| 0.2701 | 37.49 | 900 | 0.2587 | 1.0 | 1.0 | 1.0 | 31 | 1.0 | 0.96 | 0.9796 | 25 | 0.9474 | 0.9474 | 0.9474 | 19 | 0.9565 | 1.0 | 0.9778 | 22 | 0.9794 | 0.9760 | 0.9768 | 0.9762 | 97 | 0.9798 | 0.9794 | 0.9794 | 97 | 0.2375 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 1, 0], [2, 0, 0, 18, 1], [3, 0, 0, 0, 22]] |
| 0.1745 | 41.65 | 1000 | 0.2219 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9545 | 0.9545 | 0.9545 | 22 | 0.9794 | 0.9808 | 0.9755 | 0.9779 | 97 | 0.9797 | 0.9794 | 0.9793 | 97 | 0.2445 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]] |
| 0.1494 | 45.82 | 1100 | 0.2548 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 0.96 | 0.9796 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9130 | 0.9545 | 0.9333 | 22 | 0.9691 | 0.9704 | 0.9655 | 0.9675 | 97 | 0.9703 | 0.9691 | 0.9693 | 97 | 0.2352 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 0, 1], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]] |
| 0.1213 | 49.98 | 1200 | 0.1756 | 0.9688 | 1.0 | 0.9841 | 31 | 0.9615 | 1.0 | 0.9804 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9794 | 0.9826 | 0.9755 | 0.9786 | 97 | 0.9801 | 0.9794 | 0.9793 | 97 | 0.2260 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 1, 18, 0], [3, 1, 0, 0, 21]] |
| 0.0964 | 54.16 | 1300 | 0.1884 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9545 | 0.9545 | 0.9545 | 22 | 0.9794 | 0.9808 | 0.9755 | 0.9779 | 97 | 0.9797 | 0.9794 | 0.9793 | 97 | 0.2260 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]] |
| 0.0859 | 58.33 | 1400 | 0.1212 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 1.0 | 1.0 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9897 | 0.9922 | 0.9886 | 0.9902 | 97 | 0.9900 | 0.9897 | 0.9897 | 97 | 0.2202 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 19, 0], [3, 1, 0, 0, 21]] |
| 0.0845 | 62.49 | 1500 | 0.1254 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 1.0 | 1.0 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9897 | 0.9922 | 0.9886 | 0.9902 | 97 | 0.9900 | 0.9897 | 0.9897 | 97 | 0.2178 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 19, 0], [3, 1, 0, 0, 21]] |
| 0.0831 | 66.65 | 1600 | 0.1590 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9545 | 0.9545 | 0.9545 | 22 | 0.9794 | 0.9808 | 0.9755 | 0.9779 | 97 | 0.9797 | 0.9794 | 0.9793 | 97 | 0.2202 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]] |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Declan/CNN_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 450
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Declan/NewYorkTimes_model_v3 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: a photo of britazzleshorg cat at Shakespeare outdoor amphitheater in pet costume
---
# DreamBooth model for the britazzleshorg concept trained by Nlpeva on the Nlpeva/British_shorthair dataset.
This is a Stable Diffusion model fine-tuned on the britazzleshorg concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of britazzleshorg cat**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `cat` images for the animal theme. It is based on ten images of a British Shorthair.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Nlpeva/britazzleshorg-cat')
image = pipeline("a photo of britazzleshorg cat").images[0]
image
```
|
Declan/Reuters_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
pipeline_tag: object-detection
tags:
- vision
---
# Detection Transformers with Assignment
By [Jeffrey Ouyang-Zhang](https://jozhang97.github.io/), [Jang Hyun Cho](https://sites.google.com/view/janghyuncho/), [Xingyi Zhou](https://www.cs.utexas.edu/~zhouxy/), [Philipp Krähenbühl](http://www.philkr.net/)
From the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137).
**TL;DR.** **De**tection **T**ransformers with **A**ssignment (DETA) re-introduces IoU assignment and NMS for transformer-based detectors. DETA trains and tests comparably as fast as Deformable-DETR and converges much faster (50.2 mAP in 12 epochs on COCO).
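As a sketch of inference (assuming a `transformers` version that ships the DETA classes, and using the public `jozhang97/deta-swin-large` checkpoint as a stand-in for this one):
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetaForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large")
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections above a confidence threshold
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=torch.tensor([image.size[::-1]])
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```
 |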
DeltaHub/adapter_t5-3b_mrpc | [
"pytorch",
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-q-learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="victormmp1/Taxi-v3-q-learning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DeltaHub/lora_t5-base_mrpc | [
"pytorch",
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 606.50 +/- 136.90
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga odiaz1066 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga odiaz1066 -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga odiaz1066
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Denny29/DialoGPT-medium-asunayuuki | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- ba
license: apache-2.0
tags:
- grammatical error correction
---
# Canine-c Bashkir Spelling Correction v1
This model is a version of [google/canine-c](https://huggingface.co/google/canine-c) fine-tuned to fix corrupted texts.
It was trained on a mixture of two parallel datasets in the Bashkir language:
- sentences post-edited by humans after OCR
- artificially randomly corrupted sentences along with their original versions
For each character, the model predicts whether to replace it and whether to insert another character next to it.
In this way, the model can be used to fix spelling or OCR errors.
On a held-out set, it reduces the number of required edits by 40%.
## How to use
You can use the model by feeding sentences to the following code:
```Python
import torch
from transformers import CanineTokenizer, CanineForTokenClassification
tokenizer = CanineTokenizer.from_pretrained('slone/canine-c-bashkir-gec-v1')
model = CanineForTokenClassification.from_pretrained('slone/canine-c-bashkir-gec-v1')
if torch.cuda.is_available():
    model.cuda()
LABELS_THIS = [c[5:] for c in model.config.id2label.values() if c.startswith('THIS_')]
LABELS_NEXT = [c[5:] for c in model.config.id2label.values() if c.startswith('NEXT_')]
def fix_text(text, boost=0):
    """Apply the model to edit the text. `boost` is a parameter to control edit aggressiveness."""
    bx = tokenizer(text, return_tensors='pt', padding=True)
    with torch.inference_mode():
        out = model(**bx.to(model.device))
    n1, n2 = len(LABELS_THIS), len(LABELS_NEXT)
    logits1 = out.logits[0, :, :n1].view(-1, n1)
    logits2 = out.logits[0, :, n1:].view(-1, n2)
    if boost:
        # shift logits away from the "no edit" labels
        logits1[1:, 0] -= boost
        logits2[:, 0] -= boost
    ids1, ids2 = logits1.argmax(-1).tolist(), logits2.argmax(-1).tolist()
    result = []
    for c, id1, id2 in zip(' ' + text, ids1, ids2):
        l1, l2 = LABELS_THIS[id1], LABELS_NEXT[id2]
        if l1 == 'KEEP':
            result.append(c)
        elif l1 != 'DELETE':
            result.append(l1)
        if l2 != 'PASS':
            result.append(l2)
    return ''.join(result)
text = 'У йыл дан д ың йөҙө һoрөмлэнде.'
print(fix_text(text)) # Уйылдандың йөҙө һөрөмләнде.
```
The parameter `boost` can be used to control the aggressiveness of editing:
positive values increase the probability of changing the text, negative values decrease it.
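For example, reusing `fix_text` from above:
```Python
print(fix_text(text, boost=2.0))   # edit more aggressively
print(fix_text(text, boost=-2.0))  # edit more conservatively
```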
|
Denver/distilbert-base-uncased-finetuned-squad | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- ne
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large V2 Nepali
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ne-NP
split: test
args: ne-NP
metrics:
- name: Wer
type: wer
value: 14.634146341463413
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2 Nepali
This model is a fine-tuned version of [DrishtiSharma/whisper-large-v2-hi-1e-5-2.5k-steps-v1](https://huggingface.co/DrishtiSharma/whisper-large-v2-hi-1e-5-2.5k-steps-v1) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7447
- Wer: 14.6341
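A minimal transcription sketch (the repository id is a placeholder, since the card does not state it):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-large-v2-nepali",  # placeholder -- substitute the real repo id
)
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="nepali", task="transcribe"
)
print(asr("speech.wav")["text"])
```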
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0 | 200.0 | 200 | 0.7447 | 14.6341 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
DeskDown/MarianMixFT_en-hi | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
tags:
- audio
- automatic-speech-recognition
- endpoints-template
library_name: generic
inference: false
---
# OpenAI [Whisper](https://github.com/openai/whisper) Inference Endpoint example
> Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
For more information about the model, license and limitations check the original repository at [openai/whisper](https://github.com/openai/whisper).
---
This repository implements a custom `handler` task for `automatic-speech-recognition` for 🤗 Inference Endpoints using OpenAI's new Whisper model. The code for the customized handler is in [handler.py](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/create_handler.ipynb) included on how to create the `handler.py`.
### Request
The endpoint expects a binary audio file. Below is a cURL example and a Python example using the `requests` library.
**curl**
```bash
# load audio file
wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
# run request
curl --request POST \
--url https://{ENDPOINT}/ \
--header 'Content-Type: audio/x-flac' \
--header 'Authorization: Bearer {HF_TOKEN}' \
--data-binary '@sample1.flac'
```
**Python**
```python
import mimetypes
import requests as r

ENDPOINT_URL = ""
HF_TOKEN = ""

def predict(path_to_audio: str = None):
    # read the audio file as raw bytes
    with open(path_to_audio, "rb") as i:
        b = i.read()
    # guess the mimetype from the file extension
    content_type = mimetypes.guess_type(path_to_audio)[0]
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": content_type,
    }
    response = r.post(ENDPOINT_URL, headers=headers, data=b)
    return response.json()
prediction = predict(path_to_audio="sample1.flac")
prediction
```
expected output
```json
{"text": " going along slushy country roads and speaking to damp audiences in draughty school rooms day after day for a fortnight. He'll have to put in an appearance at some place of worship on Sunday morning, and he can come to us immediately afterwards."}
```
|
DeskDown/MarianMixFT_en-id | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
title: Pet classifier!
emoji: 🐶
colorFrom: pink
colorTo: blue
sdk: gradio
sdk_version: 3.1.1
app_file: app.py
pinned: true
license: apache-2.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference |
DeskDown/MarianMixFT_en-ja | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-hate-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-hate-final
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set (a sketch of a matching `compute_metrics` function follows the list):
- Loss: 0.6212
- Accuracy: 0.7253
- Precision: 0.7207
- Recall: 0.7253
- F1: 0.7206
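The identical accuracy and recall values suggest weighted averaging over classes. Below is a hedged sketch of a `compute_metrics` function that would produce numbers of this shape; the original evaluation code is not part of this card, so weighted averaging and argmax decoding are assumptions.
```python
# Hypothetical compute_metrics for the Trainer (weighted averaging assumed).
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```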
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
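A minimal sketch of how these hyperparameters could be expressed with the 🤗 `Trainer` API; the base model name comes from the card, while `num_labels=2` and per-epoch evaluation are assumptions, and dataset preparation is omitted.
```python
# Hedged sketch only: not the original training script.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2  # assumption: binary hate / non-hate labels
)

training_args = TrainingArguments(
    output_dir="distilbert-hate-final",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,                      # Adam betas/epsilon above are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=6,
    evaluation_strategy="epoch",  # assumption, matching the per-epoch table
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
# trainer.train()
```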
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 296 | 0.5760 | 0.7025 | 0.7053 | 0.7025 | 0.6771 |
| 0.569 | 2.0 | 592 | 0.5629 | 0.7215 | 0.7168 | 0.7215 | 0.7122 |
| 0.569 | 3.0 | 888 | 0.5616 | 0.7310 | 0.7274 | 0.7310 | 0.7215 |
| 0.4683 | 4.0 | 1184 | 0.5651 | 0.7338 | 0.7295 | 0.7338 | 0.7274 |
| 0.4683 | 5.0 | 1480 | 0.5898 | 0.7338 | 0.7305 | 0.7338 | 0.7246 |
| 0.4086 | 6.0 | 1776 | 0.6212 | 0.7253 | 0.7207 | 0.7253 | 0.7206 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
DeskDown/MarianMix_en-zh-10 | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="PabloTa/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
DeskDown/MarianMix_en-zh_to_vi-ms-hi-ja | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_file: middle_dutch_passAgg.pkl
widget:
structuredData:
x0:
- 0.0
- 0.0
- 0.0
x1:
- 0.0
- 0.0
- 0.0
x10:
- 0.0
- 0.0
- 0.0
x100:
- 0.0
- 0.0
- 0.0
x1000:
- 0.0
- 0.0
- 0.0
x1001:
- 0.0
- 0.0
- 0.0
x1002:
- 0.0
- 0.0
- 0.0
x1003:
- 0.0
- 0.0
- 0.0
x1004:
- 0.0
- 0.0
- 0.0
x1005:
- 0.0
- 0.0
- 0.0
x1006:
- 0.0
- 0.0
- 0.0
x1007:
- 0.0
- 0.0
- 0.0
x1008:
- 0.0
- 0.0
- 0.0
x1009:
- 0.0
- 0.0
- 0.0
x101:
- 0.0
- 0.0
- 0.0
x1010:
- 0.0
- 0.0
- 0.0
x1011:
- 0.0
- 0.0
- 0.0
x1012:
- 0.0
- 0.0
- 0.0
x1013:
- 0.0
- 0.0
- 0.0
x1014:
- 0.0
- 0.0
- 0.0
x1015:
- 0.0
- 0.0
- 0.0
x1016:
- 0.0
- 0.0
- 0.0
x1017:
- 0.0
- 0.0
- 0.0
x1018:
- 0.0
- 0.0
- 0.0
x1019:
- 0.0
- 0.0
- 0.0
x102:
- 0.0
- 0.0
- 0.0
x1020:
- 0.0
- 0.0
- 0.0
x1021:
- 0.0
- 0.0
- 0.0
x1022:
- 0.0
- 0.0
- 0.0
x1023:
- 0.0
- 0.0
- 0.0
x1024:
- 0.0
- 0.0
- 0.0
x1025:
- 0.0
- 0.0
- 0.0
x1026:
- 0.0
- 0.0
- 0.0
x1027:
- 0.0
- 0.0
- 0.0
x1028:
- 0.0
- 0.0
- 0.0
x1029:
- 0.0
- 0.0
- 0.0
x103:
- 0.0
- 0.0
- 0.0
x1030:
- 0.0
- 0.0
- 0.0
x1031:
- 0.0
- 0.0
- 0.0
x1032:
- 0.0
- 0.0
- 0.0
x1033:
- 0.0
- 0.0
- 0.0
x1034:
- 0.0
- 0.0
- 0.0
x1035:
- 0.0
- 0.0
- 0.0
x1036:
- 0.0
- 0.0
- 0.0
x1037:
- 0.0
- 0.0
- 0.0
x1038:
- 0.0
- 0.0
- 0.0
x1039:
- 0.0
- 0.0
- 0.0
x104:
- 0.0
- 0.0
- 0.0
x1040:
- 0.0
- 0.0
- 0.0
x1041:
- 0.0
- 0.0
- 0.0
x1042:
- 0.0
- 0.0
- 0.0
x1043:
- 0.0
- 0.0
- 0.0
x1044:
- 0.0
- 0.0
- 0.0
x1045:
- 0.0
- 0.0
- 0.0
x1046:
- 0.0
- 0.0
- 0.0
x1047:
- 0.0
- 0.0
- 0.0
x1048:
- 0.0
- 0.0
- 0.0
x1049:
- 0.0
- 0.0
- 0.0
x105:
- 0.0
- 0.0
- 0.0
x1050:
- 0.0
- 0.0
- 0.0
x1051:
- 0.0
- 0.0
- 0.0
x1052:
- 0.0
- 0.0
- 0.0
x1053:
- 0.0
- 0.0
- 0.0
x1054:
- 0.0
- 0.0
- 0.0
x1055:
- 0.0
- 0.0
- 0.0
x1056:
- 0.0
- 0.0
- 0.0
x1057:
- 0.0
- 0.0
- 0.0
x1058:
- 0.0
- 0.0
- 0.0
x1059:
- 0.0
- 0.0
- 0.0
x106:
- 0.0
- 0.0
- 0.0
x1060:
- 0.0
- 0.0
- 0.0
x1061:
- 0.0
- 0.0
- 0.0
x1062:
- 0.0
- 0.0
- 0.0
x1063:
- 0.0
- 0.0
- 0.0
x1064:
- 0.0
- 0.0
- 0.0
x1065:
- 0.0
- 0.0
- 0.0
x1066:
- 0.0
- 0.0
- 0.0
x1067:
- 0.0
- 0.0
- 0.0
x1068:
- 0.0
- 0.0
- 0.0
x1069:
- 0.0
- 0.0
- 0.0
x107:
- 0.0
- 0.0
- 0.0
x1070:
- 0.0
- 0.0
- 0.0
x1071:
- 0.0
- 0.0
- 0.0
x1072:
- 0.0
- 0.0
- 0.0
x1073:
- 0.0
- 0.0
- 0.0
x1074:
- 0.0
- 0.0
- 0.0
x1075:
- 0.0
- 0.0
- 0.0
x1076:
- 0.0
- 0.0
- 0.0
x1077:
- 0.0
- 0.0
- 0.0
x1078:
- 0.0
- 0.0
- 0.0
x1079:
- 0.0
- 0.0
- 0.0
x108:
- 0.0
- 0.0
- 0.0
x1080:
- 0.0
- 0.0
- 0.0
x1081:
- 0.0
- 0.0
- 0.0
x1082:
- 0.0
- 0.0
- 0.0
x1083:
- 0.0
- 0.0
- 0.0
x1084:
- 0.0
- 0.0
- 0.0
x1085:
- 0.0
- 0.0
- 0.0
x1086:
- 0.0
- 0.0
- 0.0
x1087:
- 0.0
- 0.0
- 0.0
x1088:
- 0.0
- 0.0
- 0.0
x1089:
- 0.0
- 0.0
- 0.0
x109:
- 0.0
- 0.0
- 0.0
x1090:
- 0.0
- 0.0
- 0.0
x1091:
- 0.0
- 0.0
- 0.0
x1092:
- 0.0
- 0.0
- 0.0
x1093:
- 0.0
- 0.0
- 0.0
x1094:
- 0.0
- 0.0
- 0.0
x1095:
- 0.0
- 0.0
- 0.0
x1096:
- 0.0
- 0.0
- 0.0
x1097:
- 0.0
- 0.0
- 0.0
x1098:
- 0.0
- 0.0
- 0.0
x1099:
- 0.0
- 0.0
- 0.0
x11:
- 0.0
- 0.0
- 0.0
x110:
- 0.0
- 0.0
- 0.0
x1100:
- 0.0
- 0.0
- 0.0
x1101:
- 0.0
- 0.0
- 0.0
x1102:
- 0.0
- 0.0
- 0.0
x1103:
- 0.0
- 0.0
- 0.0
x1104:
- 0.0
- 0.0
- 0.0
x1105:
- 0.0
- 0.0
- 0.0
x1106:
- 0.0
- 0.0
- 0.0
x1107:
- 0.0
- 0.0
- 0.0
x1108:
- 0.0
- 0.0
- 0.0
x1109:
- 0.0
- 0.0
- 0.0
x111:
- 0.0
- 0.0
- 0.0
x1110:
- 0.0
- 0.0
- 0.0
x1111:
- 0.0
- 0.0
- 0.0
x1112:
- 0.0
- 0.0
- 0.0
x1113:
- 0.0
- 0.0
- 0.0
x1114:
- 0.0
- 0.0
- 0.0
x1115:
- 0.0
- 0.0
- 0.0
x1116:
- 0.0
- 0.0
- 0.0
x1117:
- 0.0
- 0.0
- 0.0
x1118:
- 0.0
- 0.0
- 0.0
x1119:
- 0.0
- 0.0
- 0.0
x112:
- 0.0
- 0.0
- 0.0
x1120:
- 0.0
- 0.0
- 0.0
x1121:
- 0.0
- 0.0
- 0.0
x1122:
- 0.0
- 0.0
- 0.0
x1123:
- 0.0
- 0.0
- 0.0
x1124:
- 0.0
- 0.0
- 0.0
x1125:
- 0.0
- 0.0
- 0.0
x1126:
- 0.0
- 0.0
- 0.0
x1127:
- 0.0
- 0.0
- 0.0
x1128:
- 0.0
- 0.0
- 0.0
x1129:
- 0.0
- 0.0
- 0.0
x113:
- 1.0
- 0.0
- 0.0
x1130:
- 0.0
- 0.0
- 0.0
x1131:
- 0.0
- 0.0
- 0.0
x1132:
- 0.0
- 0.0
- 0.0
x1133:
- 0.0
- 0.0
- 0.0
x1134:
- 0.0
- 0.0
- 0.0
x1135:
- 0.0
- 0.0
- 0.0
x1136:
- 0.0
- 0.0
- 0.0
x1137:
- 0.0
- 0.0
- 0.0
x1138:
- 0.0
- 0.0
- 0.0
x1139:
- 0.0
- 0.0
- 0.0
x114:
- 0.0
- 0.0
- 0.0
x1140:
- 0.0
- 0.0
- 0.0
x1141:
- 0.0
- 0.0
- 0.0
x1142:
- 0.0
- 0.0
- 0.0
x1143:
- 0.0
- 0.0
- 0.0
x1144:
- 0.0
- 0.0
- 0.0
x1145:
- 0.0
- 0.0
- 0.0
x1146:
- 0.0
- 0.0
- 0.0
x1147:
- 0.0
- 0.0
- 0.0
x1148:
- 0.0
- 0.0
- 0.0
x1149:
- 0.0
- 0.0
- 0.0
x115:
- 0.0
- 0.0
- 0.0
x1150:
- 0.0
- 0.0
- 0.0
x1151:
- 0.0
- 0.0
- 0.0
x1152:
- 0.0
- 0.0
- 0.0
x1153:
- 0.0
- 0.0
- 0.0
x1154:
- 0.0
- 0.0
- 0.0
x1155:
- 0.0
- 0.0
- 0.0
x1156:
- 0.0
- 0.0
- 0.0
x1157:
- 0.0
- 0.0
- 0.0
x1158:
- 0.0
- 0.0
- 0.0
x1159:
- 0.0
- 0.0
- 0.0
x116:
- 0.0
- 0.0
- 0.0
x1160:
- 0.0
- 0.0
- 0.0
x1161:
- 0.0
- 0.0
- 0.0
x1162:
- 0.0
- 0.0
- 0.0
x1163:
- 0.0
- 0.0
- 0.0
x1164:
- 0.0
- 0.0
- 0.0
x1165:
- 0.0
- 0.0
- 0.0
x1166:
- 0.0
- 0.0
- 0.0
x1167:
- 0.0
- 0.0
- 0.0
x1168:
- 0.0
- 0.0
- 0.0
x1169:
- 0.0
- 0.0
- 0.0
x117:
- 0.0
- 0.0
- 0.0
x1170:
- 0.0
- 0.0
- 0.0
x1171:
- 0.0
- 0.0
- 0.0
x1172:
- 0.0
- 0.0
- 0.0
x1173:
- 0.0
- 0.0
- 0.0
x1174:
- 0.0
- 0.0
- 0.0
x1175:
- 0.0
- 0.0
- 0.0
x1176:
- 0.0
- 0.0
- 0.0
x1177:
- 0.0
- 0.0
- 0.0
x1178:
- 0.0
- 0.0
- 0.0
x1179:
- 0.0
- 0.0
- 0.0
x118:
- 0.0
- 0.0
- 0.0
x1180:
- 0.0
- 0.0
- 0.0
x1181:
- 0.0
- 0.0
- 0.0
x1182:
- 0.0
- 0.0
- 0.0
x1183:
- 0.0
- 0.0
- 0.0
x1184:
- 0.0
- 0.0
- 0.0
x1185:
- 0.0
- 0.0
- 0.0
x1186:
- 0.0
- 0.0
- 0.0
x1187:
- 0.0
- 0.0
- 0.0
x1188:
- 0.0
- 0.0
- 0.0
x1189:
- 0.0
- 0.0
- 0.0
x119:
- 0.0
- 0.0
- 0.0
x1190:
- 0.0
- 0.0
- 0.0
x1191:
- 0.0
- 0.0
- 0.0
x1192:
- 0.0
- 0.0
- 0.0
x1193:
- 0.0
- 0.0
- 0.0
x1194:
- 0.0
- 0.0
- 0.0
x1195:
- 0.0
- 0.0
- 0.0
x1196:
- 0.0
- 0.0
- 0.0
x1197:
- 0.0
- 0.0
- 0.0
x1198:
- 0.0
- 0.0
- 0.0
x1199:
- 0.0
- 0.0
- 0.0
x12:
- 0.0
- 0.0
- 0.0
x120:
- 0.0
- 0.0
- 0.0
x1200:
- 0.0
- 0.0
- 0.0
x1201:
- 0.0
- 0.0
- 0.0
x1202:
- 0.0
- 0.0
- 0.0
x1203:
- 0.0
- 0.0
- 0.0
x1204:
- 0.0
- 0.0
- 0.0
x1205:
- 0.0
- 0.0
- 0.0
x1206:
- 0.0
- 0.0
- 0.0
x1207:
- 0.0
- 0.0
- 0.0
x1208:
- 0.0
- 0.0
- 0.0
x1209:
- 0.0
- 0.0
- 0.0
x121:
- 0.0
- 0.0
- 0.0
x1210:
- 0.0
- 0.0
- 0.0
x1211:
- 0.0
- 0.0
- 0.0
x1212:
- 0.0
- 0.0
- 0.0
x1213:
- 0.0
- 0.0
- 0.0
x1214:
- 0.0
- 0.0
- 0.0
x1215:
- 0.0
- 0.0
- 0.0
x1216:
- 0.0
- 0.0
- 0.0
x1217:
- 0.0
- 0.0
- 0.0
x1218:
- 0.0
- 0.0
- 0.0
x1219:
- 0.0
- 0.0
- 0.0
x122:
- 0.0
- 0.0
- 0.0
x1220:
- 0.0
- 0.0
- 0.0
x1221:
- 0.0
- 0.0
- 0.0
x1222:
- 0.0
- 0.0
- 0.0
x1223:
- 0.0
- 0.0
- 0.0
x1224:
- 0.0
- 0.0
- 0.0
x1225:
- 0.0
- 0.0
- 0.0
x1226:
- 0.0
- 0.0
- 0.0
x1227:
- 0.0
- 0.0
- 0.0
x1228:
- 0.0
- 0.0
- 0.0
x1229:
- 0.0
- 0.0
- 0.0
x123:
- 0.0
- 0.0
- 0.0
x1230:
- 0.0
- 0.0
- 0.0
x1231:
- 0.0
- 0.0
- 0.0
x1232:
- 0.0
- 0.0
- 0.0
x1233:
- 0.0
- 0.0
- 0.0
x1234:
- 0.0
- 0.0
- 0.0
x1235:
- 0.0
- 0.0
- 0.0
x1236:
- 0.0
- 0.0
- 0.0
x1237:
- 0.0
- 0.0
- 0.0
x1238:
- 0.0
- 0.0
- 0.0
x1239:
- 0.0
- 0.0
- 0.0
x124:
- 0.0
- 0.0
- 0.0
x1240:
- 0.0
- 0.0
- 0.0
x1241:
- 0.0
- 0.0
- 0.0
x1242:
- 0.0
- 0.0
- 0.0
x1243:
- 0.0
- 0.0
- 0.0
x1244:
- 0.0
- 0.0
- 0.0
x1245:
- 0.0
- 0.0
- 0.0
x1246:
- 0.0
- 0.0
- 0.0
x1247:
- 0.0
- 0.0
- 0.0
x1248:
- 0.0
- 0.0
- 0.0
x1249:
- 0.0
- 0.0
- 0.0
x125:
- 0.0
- 0.0
- 0.0
x1250:
- 0.0
- 0.0
- 0.0
x1251:
- 0.0
- 0.0
- 0.0
x1252:
- 0.0
- 0.0
- 0.0
x1253:
- 0.0
- 0.0
- 0.0
x1254:
- 0.0
- 0.0
- 0.0
x1255:
- 0.0
- 0.0
- 0.0
x1256:
- 0.0
- 0.0
- 0.0
x1257:
- 0.0
- 0.0
- 0.0
x1258:
- 0.0
- 0.0
- 0.0
x1259:
- 0.0
- 0.0
- 0.0
x126:
- 0.0
- 0.0
- 0.0
x1260:
- 0.0
- 0.0
- 0.0
x1261:
- 0.0
- 0.0
- 0.0
x1262:
- 0.0
- 0.0
- 0.0
x1263:
- 0.0
- 0.0
- 0.0
x1264:
- 0.0
- 0.0
- 0.0
x1265:
- 0.0
- 0.0
- 0.0
x1266:
- 0.0
- 0.0
- 0.0
x1267:
- 0.0
- 0.0
- 0.0
x1268:
- 0.0
- 0.0
- 0.0
x1269:
- 0.0
- 0.0
- 0.0
x127:
- 0.0
- 0.0
- 0.0
x1270:
- 0.0
- 0.0
- 0.0
x1271:
- 0.0
- 0.0
- 0.0
x1272:
- 0.0
- 0.0
- 0.0
x1273:
- 0.0
- 0.0
- 0.0
x1274:
- 0.0
- 0.0
- 0.0
x1275:
- 0.0
- 0.0
- 0.0
x1276:
- 0.0
- 0.0
- 0.0
x1277:
- 0.0
- 0.0
- 0.0
x1278:
- 0.0
- 0.0
- 0.0
x1279:
- 0.0
- 0.0
- 0.0
x128:
- 0.0
- 0.0
- 0.0
x1280:
- 0.0
- 0.0
- 0.0
x1281:
- 0.0
- 0.0
- 0.0
x1282:
- 0.0
- 0.0
- 0.0
x1283:
- 0.0
- 0.0
- 0.0
x1284:
- 0.0
- 0.0
- 0.0
x1285:
- 0.0
- 0.0
- 0.0
x1286:
- 0.0
- 0.0
- 0.0
x1287:
- 0.0
- 0.0
- 0.0
x1288:
- 0.0
- 0.0
- 0.0
x1289:
- 0.0
- 0.0
- 0.0
x129:
- 0.0
- 0.0
- 0.0
x1290:
- 0.0
- 0.0
- 0.0
x1291:
- 0.0
- 0.0
- 0.0
x1292:
- 0.0
- 0.0
- 0.0
x1293:
- 0.0
- 0.0
- 0.0
x1294:
- 0.0
- 0.0
- 0.0
x1295:
- 0.0
- 0.0
- 0.0
x1296:
- 0.0
- 0.0
- 0.0
x1297:
- 0.0
- 0.0
- 0.0
x1298:
- 0.0
- 0.0
- 0.0
x1299:
- 0.0
- 0.0
- 0.0
x13:
- 0.0
- 0.0
- 0.0
x130:
- 0.0
- 0.0
- 0.0
x1300:
- 0.0
- 0.0
- 0.0
x1301:
- 0.0
- 0.0
- 0.0
x1302:
- 0.0
- 0.0
- 0.0
x1303:
- 0.0
- 0.0
- 0.0
x1304:
- 0.0
- 0.0
- 0.0
x1305:
- 0.0
- 0.0
- 0.0
x1306:
- 0.0
- 0.0
- 0.0
x1307:
- 0.0
- 0.0
- 0.0
x1308:
- 0.0
- 0.0
- 0.0
x1309:
- 0.0
- 0.0
- 0.0
x131:
- 0.0
- 0.0
- 0.0
x1310:
- 0.0
- 0.0
- 0.0
x1311:
- 0.0
- 0.0
- 0.0
x1312:
- 0.0
- 0.0
- 0.0
x1313:
- 0.0
- 0.0
- 0.0
x1314:
- 0.0
- 0.0
- 0.0
x1315:
- 0.0
- 0.0
- 0.0
x1316:
- 0.0
- 0.0
- 0.0
x1317:
- 0.0
- 0.0
- 0.0
x1318:
- 0.0
- 0.0
- 0.0
x1319:
- 0.0
- 0.0
- 0.0
x132:
- 0.0
- 0.0
- 0.0
x1320:
- 0.0
- 0.0
- 0.0
x1321:
- 0.0
- 0.0
- 0.0
x1322:
- 0.0
- 0.0
- 0.0
x1323:
- 0.0
- 0.0
- 0.0
x1324:
- 0.0
- 0.0
- 0.0
x1325:
- 0.0
- 0.0
- 0.0
x1326:
- 0.0
- 0.0
- 0.0
x1327:
- 0.0
- 0.0
- 0.0
x1328:
- 0.0
- 0.0
- 0.0
x1329:
- 0.0
- 0.0
- 0.0
x133:
- 0.0
- 0.0
- 0.0
x1330:
- 0.0
- 0.0
- 0.0
x1331:
- 0.0
- 0.0
- 0.0
x1332:
- 0.0
- 0.0
- 0.0
x1333:
- 0.0
- 0.0
- 0.0
x1334:
- 0.0
- 0.0
- 0.0
x1335:
- 0.0
- 0.0
- 0.0
x1336:
- 0.0
- 0.0
- 0.0
x1337:
- 0.0
- 0.0
- 0.0
x1338:
- 0.0
- 0.0
- 0.0
x1339:
- 0.0
- 0.0
- 0.0
x134:
- 0.0
- 0.0
- 0.0
x1340:
- 0.0
- 0.0
- 0.0
x1341:
- 0.0
- 0.0
- 0.0
x1342:
- 0.0
- 0.0
- 0.0
x1343:
- 0.0
- 0.0
- 0.0
x1344:
- 0.0
- 0.0
- 0.0
x1345:
- 0.0
- 0.0
- 0.0
x1346:
- 0.0
- 0.0
- 0.0
x1347:
- 0.0
- 0.0
- 0.0
x1348:
- 0.0
- 0.0
- 0.0
x1349:
- 0.0
- 0.0
- 0.0
x135:
- 0.0
- 0.0
- 0.0
x1350:
- 0.0
- 0.0
- 0.0
x1351:
- 0.0
- 0.0
- 0.0
x1352:
- 0.0
- 0.0
- 0.0
x1353:
- 0.0
- 0.0
- 0.0
x1354:
- 0.0
- 0.0
- 0.0
x1355:
- 0.0
- 0.0
- 0.0
x1356:
- 0.0
- 0.0
- 0.0
x1357:
- 0.0
- 0.0
- 0.0
x1358:
- 0.0
- 0.0
- 0.0
x1359:
- 0.0
- 0.0
- 0.0
x136:
- 0.0
- 0.0
- 0.0
x1360:
- 0.0
- 0.0
- 0.0
x1361:
- 0.0
- 0.0
- 0.0
x1362:
- 0.0
- 0.0
- 0.0
x1363:
- 0.0
- 0.0
- 0.0
x1364:
- 0.0
- 0.0
- 0.0
x1365:
- 0.0
- 0.0
- 0.0
x1366:
- 0.0
- 0.0
- 0.0
x1367:
- 0.0
- 0.0
- 0.0
x1368:
- 0.0
- 0.0
- 0.0
x1369:
- 0.0
- 0.0
- 0.0
x137:
- 0.0
- 0.0
- 0.0
x1370:
- 0.0
- 0.0
- 0.0
x1371:
- 0.0
- 0.0
- 0.0
x1372:
- 0.0
- 0.0
- 0.0
x1373:
- 0.0
- 0.0
- 0.0
x1374:
- 0.0
- 0.0
- 0.0
x1375:
- 0.0
- 0.0
- 0.0
x1376:
- 0.0
- 0.0
- 0.0
x1377:
- 0.0
- 0.0
- 0.0
x1378:
- 0.0
- 0.0
- 0.0
x1379:
- 0.0
- 0.0
- 0.0
x138:
- 0.0
- 0.0
- 0.0
x1380:
- 0.0
- 0.0
- 0.0
x1381:
- 0.0
- 0.0
- 0.0
x1382:
- 0.0
- 0.0
- 0.0
x1383:
- 0.0
- 0.0
- 0.0
x1384:
- 0.0
- 0.0
- 0.0
x1385:
- 0.0
- 0.0
- 0.0
x1386:
- 0.0
- 0.0
- 0.0
x1387:
- 0.0
- 0.0
- 0.0
x1388:
- 0.0
- 0.0
- 0.0
x1389:
- 0.0
- 0.0
- 0.0
x139:
- 0.0
- 0.0
- 0.0
x1390:
- 0.0
- 0.0
- 0.0
x1391:
- 0.0
- 0.0
- 0.0
x1392:
- 0.0
- 0.0
- 0.0
x1393:
- 0.0
- 0.0
- 0.0
x1394:
- 0.0
- 0.0
- 0.0
x1395:
- 0.0
- 0.0
- 0.0
x1396:
- 0.0
- 0.0
- 0.0
x1397:
- 0.0
- 0.0
- 0.0
x1398:
- 0.0
- 0.0
- 0.0
x1399:
- 0.0
- 0.0
- 0.0
x14:
- 0.0
- 0.0
- 0.0
x140:
- 0.0
- 0.0
- 0.0
x1400:
- 0.0
- 0.0
- 0.0
x1401:
- 0.0
- 0.0
- 0.0
x1402:
- 0.0
- 0.0
- 0.0
x1403:
- 0.0
- 0.0
- 0.0
x1404:
- 0.0
- 0.0
- 0.0
x1405:
- 0.0
- 0.0
- 0.0
x1406:
- 0.0
- 0.0
- 0.0
x1407:
- 0.0
- 0.0
- 0.0
x1408:
- 0.0
- 0.0
- 0.0
x1409:
- 0.0
- 0.0
- 0.0
x141:
- 0.0
- 0.0
- 0.0
x1410:
- 0.0
- 0.0
- 0.0
x1411:
- 0.0
- 0.0
- 0.0
x1412:
- 0.0
- 0.0
- 0.0
x1413:
- 0.0
- 0.0
- 0.0
x1414:
- 0.0
- 0.0
- 0.0
x1415:
- 0.0
- 0.0
- 0.0
x1416:
- 0.0
- 0.0
- 0.0
x1417:
- 0.0
- 0.0
- 0.0
x1418:
- 0.0
- 0.0
- 0.0
x1419:
- 0.0
- 0.0
- 0.0
x142:
- 0.0
- 0.0
- 0.0
x1420:
- 0.0
- 0.0
- 0.0
x1421:
- 0.0
- 0.0
- 0.0
x1422:
- 0.0
- 0.0
- 0.0
x1423:
- 0.0
- 0.0
- 0.0
x1424:
- 0.0
- 0.0
- 0.0
x1425:
- 0.0
- 0.0
- 0.0
x1426:
- 0.0
- 0.0
- 0.0
x1427:
- 0.0
- 0.0
- 0.0
x1428:
- 0.0
- 0.0
- 0.0
x1429:
- 0.0
- 0.0
- 0.0
x143:
- 0.0
- 0.0
- 0.0
x1430:
- 0.0
- 0.0
- 0.0
x1431:
- 0.0
- 0.0
- 0.0
x1432:
- 0.0
- 0.0
- 0.0
x1433:
- 0.0
- 0.0
- 0.0
x1434:
- 0.0
- 0.0
- 0.0
x1435:
- 0.0
- 0.0
- 0.0
x1436:
- 0.0
- 0.0
- 0.0
x1437:
- 0.0
- 0.0
- 0.0
x1438:
- 0.0
- 0.0
- 0.0
x1439:
- 0.0
- 0.0
- 0.0
x144:
- 0.0
- 0.0
- 0.0
x1440:
- 0.0
- 0.0
- 0.0
x1441:
- 0.0
- 0.0
- 0.0
x1442:
- 0.0
- 0.0
- 0.0
x1443:
- 0.0
- 0.0
- 0.0
x1444:
- 0.0
- 0.0
- 0.0
x1445:
- 0.0
- 0.0
- 0.0
x1446:
- 0.0
- 0.0
- 0.0
x1447:
- 0.0
- 0.0
- 0.0
x1448:
- 0.0
- 0.0
- 0.0
x1449:
- 0.0
- 0.0
- 0.0
x145:
- 0.0
- 0.0
- 0.0
x1450:
- 0.0
- 0.0
- 0.0
x1451:
- 0.0
- 0.0
- 0.0
x1452:
- 0.0
- 0.0
- 0.0
x1453:
- 0.0
- 0.0
- 0.0
x1454:
- 0.0
- 0.0
- 0.0
x1455:
- 0.0
- 0.0
- 0.0
x1456:
- 0.0
- 0.0
- 0.0
x1457:
- 0.0
- 0.0
- 0.0
x1458:
- 0.0
- 0.0
- 0.0
x1459:
- 0.0
- 0.0
- 0.0
x146:
- 0.0
- 0.0
- 0.0
x1460:
- 0.0
- 0.0
- 0.0
x1461:
- 0.0
- 0.0
- 0.0
x1462:
- 0.0
- 0.0
- 0.0
x1463:
- 0.0
- 0.0
- 0.0
x1464:
- 0.0
- 0.0
- 0.0
x1465:
- 0.0
- 0.0
- 0.0
x1466:
- 0.0
- 0.0
- 0.0
x1467:
- 0.0
- 0.0
- 0.0
x1468:
- 0.0
- 0.0
- 0.0
x1469:
- 0.0
- 0.0
- 0.0
x147:
- 0.0
- 0.0
- 0.0
x1470:
- 0.0
- 0.0
- 0.0
x1471:
- 0.0
- 0.0
- 0.0
x1472:
- 0.0
- 0.0
- 0.0
x1473:
- 0.0
- 0.0
- 0.0
x1474:
- 0.0
- 0.0
- 0.0
x1475:
- 0.0
- 0.0
- 0.0
x1476:
- 0.0
- 0.0
- 0.0
x1477:
- 0.0
- 0.0
- 0.0
x1478:
- 0.0
- 0.0
- 0.0
x1479:
- 0.0
- 0.0
- 0.0
x148:
- 0.0
- 0.0
- 0.0
x1480:
- 0.0
- 0.0
- 0.0
x1481:
- 0.0
- 0.0
- 0.0
x1482:
- 0.0
- 0.0
- 0.0
x1483:
- 0.0
- 0.0
- 0.0
x1484:
- 0.0
- 0.0
- 0.0
x1485:
- 0.0
- 0.0
- 0.0
x1486:
- 0.0
- 0.0
- 0.0
x1487:
- 0.0
- 0.0
- 0.0
x1488:
- 0.0
- 0.0
- 0.0
x1489:
- 0.0
- 0.0
- 0.0
x149:
- 0.0
- 0.0
- 0.0
x1490:
- 0.0
- 0.0
- 0.0
x1491:
- 0.0
- 0.0
- 0.0
x1492:
- 0.0
- 0.0
- 0.0
x1493:
- 0.0
- 0.0
- 0.0
x1494:
- 0.0
- 0.0
- 0.0
x1495:
- 0.0
- 0.0
- 0.0
x1496:
- 0.0
- 0.0
- 0.0
x1497:
- 0.0
- 0.0
- 0.0
x1498:
- 0.0
- 0.0
- 0.0
x1499:
- 0.0
- 0.0
- 0.0
x15:
- 0.0
- 0.0
- 0.0
x150:
- 0.0
- 0.0
- 0.0
x1500:
- 0.0
- 0.0
- 0.0
x1501:
- 0.0
- 0.0
- 0.0
x1502:
- 0.0
- 0.0
- 0.0
x1503:
- 0.0
- 0.0
- 0.0
x1504:
- 0.0
- 0.0
- 0.0
x1505:
- 0.0
- 0.0
- 0.0
x1506:
- 0.0
- 0.0
- 0.0
x1507:
- 0.0
- 0.0
- 0.0
x1508:
- 0.0
- 0.0
- 0.0
x1509:
- 0.0
- 0.0
- 0.0
x151:
- 0.0
- 0.0
- 0.0
x1510:
- 0.0
- 0.0
- 0.0
x1511:
- 0.0
- 0.0
- 0.0
x1512:
- 0.0
- 0.0
- 0.0
x1513:
- 0.0
- 0.0
- 0.0
x1514:
- 0.0
- 0.0
- 0.0
x1515:
- 0.0
- 0.0
- 0.0
x1516:
- 0.0
- 0.0
- 0.0
x1517:
- 0.0
- 0.0
- 0.0
x1518:
- 0.0
- 0.0
- 0.0
x1519:
- 0.0
- 0.0
- 0.0
x152:
- 0.0
- 0.0
- 0.0
x1520:
- 0.0
- 0.0
- 0.0
x1521:
- 0.0
- 0.0
- 0.0
x1522:
- 0.0
- 0.0
- 0.0
x1523:
- 0.0
- 0.0
- 0.0
x1524:
- 0.0
- 0.0
- 0.0
x1525:
- 0.0
- 0.0
- 0.0
x1526:
- 0.0
- 0.0
- 0.0
x1527:
- 0.0
- 0.0
- 0.0
x1528:
- 0.0
- 0.0
- 0.0
x1529:
- 0.0
- 0.0
- 0.0
x153:
- 0.0
- 0.0
- 0.0
x1530:
- 0.0
- 0.0
- 0.0
x1531:
- 0.0
- 0.0
- 0.0
x1532:
- 0.0
- 0.0
- 0.0
x1533:
- 0.0
- 0.0
- 0.0
x1534:
- 0.0
- 0.0
- 0.0
x1535:
- 0.0
- 0.0
- 0.0
x1536:
- 0.0
- 0.0
- 0.0
x1537:
- 0.0
- 0.0
- 0.0
x1538:
- 0.0
- 0.0
- 0.0
x1539:
- 0.0
- 0.0
- 0.0
x154:
- 0.0
- 0.0
- 0.0
x1540:
- 0.0
- 0.0
- 0.0
x1541:
- 0.0
- 0.0
- 0.0
x1542:
- 0.0
- 0.0
- 0.0
x1543:
- 0.0
- 0.0
- 0.0
x1544:
- 0.0
- 0.0
- 0.0
x1545:
- 0.0
- 0.0
- 0.0
x1546:
- 0.0
- 0.0
- 0.0
x1547:
- 0.0
- 0.0
- 0.0
x1548:
- 0.0
- 0.0
- 0.0
x1549:
- 0.0
- 0.0
- 0.0
x155:
- 0.0
- 0.0
- 0.0
x1550:
- 0.0
- 0.0
- 0.0
x1551:
- 0.0
- 0.0
- 0.0
x1552:
- 0.0
- 0.0
- 0.0
x1553:
- 0.0
- 0.0
- 0.0
x1554:
- 0.0
- 0.0
- 0.0
x1555:
- 0.0
- 0.0
- 0.0
x1556:
- 0.0
- 0.0
- 0.0
x1557:
- 0.0
- 0.0
- 0.0
x1558:
- 0.0
- 0.0
- 0.0
x1559:
- 0.0
- 0.0
- 0.0
x156:
- 0.0
- 0.0
- 0.0
x1560:
- 0.0
- 0.0
- 0.0
x1561:
- 0.0
- 0.0
- 0.0
x1562:
- 0.0
- 0.0
- 0.0
x1563:
- 0.0
- 0.0
- 0.0
x1564:
- 0.0
- 0.0
- 0.0
x1565:
- 0.0
- 0.0
- 0.0
x1566:
- 0.0
- 0.0
- 0.0
x1567:
- 0.0
- 0.0
- 0.0
x1568:
- 0.0
- 0.0
- 0.0
x1569:
- 0.0
- 0.0
- 0.0
x157:
- 0.0
- 0.0
- 0.0
x1570:
- 0.0
- 0.0
- 0.0
x1571:
- 0.0
- 0.0
- 0.0
x1572:
- 0.0
- 0.0
- 0.0
x1573:
- 0.0
- 0.0
- 0.0
x1574:
- 0.0
- 0.0
- 0.0
x1575:
- 0.0
- 0.0
- 0.0
x1576:
- 0.0
- 0.0
- 0.0
x1577:
- 0.0
- 0.0
- 0.0
x1578:
- 0.0
- 0.0
- 0.0
x1579:
- 0.0
- 0.0
- 0.0
x158:
- 0.0
- 0.0
- 0.0
x1580:
- 0.0
- 0.0
- 0.0
x1581:
- 0.0
- 0.0
- 0.0
x1582:
- 0.0
- 0.0
- 0.0
x1583:
- 0.0
- 0.0
- 0.0
x1584:
- 0.0
- 0.0
- 0.0
x1585:
- 0.0
- 0.0
- 0.0
x1586:
- 0.0
- 0.0
- 0.0
x1587:
- 0.0
- 0.0
- 0.0
x1588:
- 0.0
- 0.0
- 0.0
x1589:
- 0.0
- 0.0
- 0.0
x159:
- 0.0
- 0.0
- 0.0
x1590:
- 0.0
- 0.0
- 0.0
x1591:
- 0.0
- 0.0
- 0.0
x1592:
- 0.0
- 0.0
- 0.0
x1593:
- 0.0
- 0.0
- 0.0
x1594:
- 0.0
- 0.0
- 0.0
x1595:
- 0.0
- 0.0
- 0.0
x1596:
- 0.0
- 0.0
- 0.0
x1597:
- 0.0
- 0.0
- 0.0
x1598:
- 0.0
- 0.0
- 0.0
x1599:
- 0.0
- 0.0
- 0.0
x16:
- 0.0
- 0.0
- 0.0
x160:
- 0.0
- 0.0
- 0.0
x1600:
- 0.0
- 0.0
- 0.0
x1601:
- 0.0
- 0.0
- 0.0
x1602:
- 0.0
- 0.0
- 0.0
x1603:
- 0.0
- 0.0
- 0.0
x1604:
- 0.0
- 0.0
- 0.0
x1605:
- 0.0
- 0.0
- 0.0
x1606:
- 0.0
- 0.0
- 0.0
x1607:
- 0.0
- 0.0
- 0.0
x1608:
- 0.0
- 0.0
- 0.0
x1609:
- 0.0
- 0.0
- 0.0
x161:
- 0.0
- 0.0
- 0.0
x1610:
- 0.0
- 0.0
- 0.0
x1611:
- 0.0
- 0.0
- 0.0
x1612:
- 0.0
- 0.0
- 0.0
x1613:
- 0.0
- 0.0
- 0.0
x1614:
- 0.0
- 0.0
- 0.0
x1615:
- 0.0
- 0.0
- 0.0
x1616:
- 0.0
- 0.0
- 0.0
x1617:
- 0.0
- 0.0
- 0.0
x1618:
- 0.0
- 0.0
- 0.0
x1619:
- 0.0
- 0.0
- 0.0
x162:
- 0.0
- 0.0
- 0.0
x1620:
- 0.0
- 0.0
- 0.0
x1621:
- 0.0
- 0.0
- 0.0
x1622:
- 0.0
- 0.0
- 0.0
x1623:
- 0.0
- 0.0
- 0.0
x1624:
- 0.0
- 0.0
- 0.0
x1625:
- 0.0
- 0.0
- 0.0
x1626:
- 0.0
- 0.0
- 0.0
x1627:
- 0.0
- 0.0
- 0.0
x1628:
- 0.0
- 0.0
- 0.0
x1629:
- 0.0
- 0.0
- 0.0
x163:
- 0.0
- 0.0
- 0.0
x1630:
- 0.0
- 0.0
- 0.0
x1631:
- 0.0
- 0.0
- 0.0
x1632:
- 0.0
- 0.0
- 0.0
x1633:
- 0.0
- 0.0
- 0.0
x1634:
- 0.0
- 0.0
- 0.0
x1635:
- 0.0
- 0.0
- 0.0
x1636:
- 0.0
- 0.0
- 0.0
x1637:
- 0.0
- 0.0
- 0.0
x1638:
- 0.0
- 0.0
- 0.0
x1639:
- 0.0
- 0.0
- 0.0
x164:
- 0.0
- 0.0
- 0.0
x1640:
- 0.0
- 0.0
- 0.0
x1641:
- 0.0
- 0.0
- 0.0
x1642:
- 0.0
- 0.0
- 0.0
x1643:
- 0.0
- 0.0
- 0.0
x1644:
- 0.0
- 0.0
- 0.0
x1645:
- 0.0
- 0.0
- 0.0
x1646:
- 0.0
- 0.0
- 0.0
x1647:
- 0.0
- 0.0
- 0.0
x1648:
- 0.0
- 0.0
- 0.0
x1649:
- 0.0
- 0.0
- 0.0
x165:
- 0.0
- 0.0
- 0.0
x1650:
- 0.0
- 0.0
- 0.0
x1651:
- 0.0
- 0.0
- 0.0
x1652:
- 0.0
- 0.0
- 0.0
x1653:
- 0.0
- 0.0
- 0.0
x1654:
- 0.0
- 0.0
- 0.0
x1655:
- 0.0
- 0.0
- 0.0
x1656:
- 0.0
- 0.0
- 0.0
x1657:
- 0.0
- 0.0
- 0.0
x1658:
- 0.0
- 0.0
- 0.0
x1659:
- 0.0
- 0.0
- 0.0
x166:
- 0.0
- 0.0
- 0.0
x1660:
- 0.0
- 0.0
- 0.0
x1661:
- 0.0
- 0.0
- 0.0
x1662:
- 0.0
- 0.0
- 0.0
x1663:
- 0.0
- 0.0
- 0.0
x1664:
- 0.0
- 0.0
- 0.0
x1665:
- 0.0
- 0.0
- 0.0
x1666:
- 0.0
- 0.0
- 0.0
x1667:
- 0.0
- 0.0
- 0.0
x1668:
- 0.0
- 0.0
- 0.0
x1669:
- 0.0
- 0.0
- 0.0
x167:
- 0.0
- 0.0
- 0.0
x1670:
- 0.0
- 0.0
- 0.0
x1671:
- 0.0
- 0.0
- 0.0
x1672:
- 0.0
- 0.0
- 0.0
x1673:
- 0.0
- 0.0
- 0.0
x1674:
- 0.0
- 0.0
- 0.0
x1675:
- 0.0
- 0.0
- 0.0
x1676:
- 0.0
- 0.0
- 0.0
x1677:
- 0.0
- 0.0
- 0.0
x1678:
- 0.0
- 0.0
- 0.0
x1679:
- 0.0
- 0.0
- 0.0
x168:
- 0.0
- 0.0
- 0.0
x1680:
- 0.0
- 0.0
- 0.0
x1681:
- 0.0
- 0.0
- 0.0
x1682:
- 0.0
- 0.0
- 0.0
x1683:
- 0.0
- 0.0
- 0.0
x1684:
- 0.0
- 0.0
- 0.0
x1685:
- 0.0
- 0.0
- 0.0
x1686:
- 0.0
- 0.0
- 0.0
x1687:
- 0.0
- 0.0
- 0.0
x1688:
- 0.0
- 0.0
- 0.0
x1689:
- 0.0
- 0.0
- 0.0
x169:
- 0.0
- 0.0
- 0.0
x1690:
- 0.0
- 0.0
- 0.0
x1691:
- 0.0
- 0.0
- 0.0
x1692:
- 0.0
- 0.0
- 0.0
x1693:
- 0.0
- 0.0
- 0.0
x1694:
- 0.0
- 0.0
- 0.0
x1695:
- 0.0
- 0.0
- 0.0
x1696:
- 0.0
- 0.0
- 0.0
x1697:
- 0.0
- 0.0
- 0.0
x1698:
- 0.0
- 0.0
- 0.0
x1699:
- 0.0
- 0.0
- 0.0
x17:
- 0.0
- 0.0
- 0.0
x170:
- 0.0
- 0.0
- 0.0
x1700:
- 0.0
- 0.0
- 0.0
x1701:
- 0.0
- 0.0
- 0.0
x1702:
- 0.0
- 0.0
- 0.0
x1703:
- 0.0
- 0.0
- 0.0
x1704:
- 0.0
- 0.0
- 0.0
x1705:
- 0.0
- 0.0
- 0.0
x1706:
- 0.0
- 0.0
- 0.0
x1707:
- 0.0
- 0.0
- 0.0
x1708:
- 0.0
- 0.0
- 0.0
x1709:
- 0.0
- 0.0
- 0.0
x171:
- 0.0
- 0.0
- 0.0
x1710:
- 0.0
- 0.0
- 0.0
x1711:
- 0.0
- 0.0
- 0.0
x1712:
- 0.0
- 0.0
- 0.0
x1713:
- 0.0
- 0.0
- 0.0
x1714:
- 0.0
- 0.0
- 0.0
x1715:
- 0.0
- 0.0
- 0.0
x1716:
- 0.0
- 0.0
- 0.0
x1717:
- 0.0
- 0.0
- 0.0
x1718:
- 0.0
- 0.0
- 0.0
x1719:
- 0.0
- 0.0
- 0.0
x172:
- 0.0
- 0.0
- 0.0
x1720:
- 0.0
- 0.0
- 0.0
x1721:
- 0.0
- 0.0
- 0.0
x1722:
- 0.0
- 0.0
- 0.0
x1723:
- 0.0
- 0.0
- 0.0
x1724:
- 0.0
- 0.0
- 0.0
x1725:
- 0.0
- 0.0
- 0.0
x1726:
- 0.0
- 0.0
- 0.0
x1727:
- 0.0
- 0.0
- 0.0
x1728:
- 0.0
- 0.0
- 0.0
x1729:
- 0.0
- 0.0
- 0.0
x173:
- 0.0
- 0.0
- 0.0
x1730:
- 0.0
- 0.0
- 0.0
x1731:
- 0.0
- 0.0
- 0.0
x1732:
- 0.0
- 0.0
- 0.0
x1733:
- 0.0
- 0.0
- 0.0
x1734:
- 0.0
- 0.0
- 0.0
x1735:
- 0.0
- 0.0
- 0.0
x1736:
- 0.0
- 0.0
- 0.0
x1737:
- 0.0
- 0.0
- 0.0
x1738:
- 0.0
- 0.0
- 0.0
x1739:
- 0.0
- 0.0
- 0.0
x174:
- 0.0
- 0.0
- 0.0
x1740:
- 0.0
- 0.0
- 0.0
x1741:
- 0.0
- 0.0
- 0.0
x1742:
- 0.0
- 0.0
- 0.0
x1743:
- 0.0
- 0.0
- 0.0
x1744:
- 0.0
- 0.0
- 0.0
x1745:
- 0.0
- 0.0
- 0.0
x1746:
- 0.0
- 0.0
- 0.0
x1747:
- 0.0
- 0.0
- 0.0
x1748:
- 0.0
- 0.0
- 0.0
x1749:
- 0.0
- 0.0
- 0.0
x175:
- 0.0
- 0.0
- 0.0
x1750:
- 0.0
- 0.0
- 0.0
x1751:
- 0.0
- 0.0
- 0.0
x1752:
- 0.0
- 0.0
- 0.0
x1753:
- 0.0
- 0.0
- 0.0
x1754:
- 0.0
- 0.0
- 0.0
x1755:
- 0.0
- 0.0
- 0.0
x1756:
- 0.0
- 0.0
- 0.0
x1757:
- 0.0
- 0.0
- 0.0
x1758:
- 0.0
- 0.0
- 0.0
x1759:
- 0.0
- 0.0
- 0.0
x176:
- 0.0
- 0.0
- 0.0
x1760:
- 0.0
- 0.0
- 0.0
x1761:
- 0.0
- 0.0
- 0.0
x1762:
- 0.0
- 0.0
- 0.0
x1763:
- 0.0
- 0.0
- 0.0
x1764:
- 0.0
- 0.0
- 0.0
x1765:
- 0.0
- 0.0
- 0.0
x1766:
- 0.0
- 0.0
- 0.0
x1767:
- 0.0
- 0.0
- 0.0
x1768:
- 0.0
- 0.0
- 0.0
x1769:
- 0.0
- 0.0
- 0.0
x177:
- 0.0
- 0.0
- 0.0
x1770:
- 0.0
- 0.0
- 0.0
x1771:
- 0.0
- 0.0
- 0.0
x1772:
- 0.0
- 0.0
- 0.0
x1773:
- 0.0
- 0.0
- 0.0
x1774:
- 0.0
- 0.0
- 0.0
x1775:
- 0.0
- 0.0
- 0.0
x1776:
- 0.0
- 0.0
- 0.0
x1777:
- 0.0
- 0.0
- 0.0
x1778:
- 0.0
- 0.0
- 0.0
x1779:
- 0.0
- 0.0
- 0.0
x178:
- 0.0
- 0.0
- 0.0
x1780:
- 0.0
- 0.0
- 0.0
x1781:
- 0.0
- 0.0
- 0.0
x1782:
- 0.0
- 0.0
- 0.0
x1783:
- 0.0
- 0.0
- 0.0
x1784:
- 0.0
- 0.0
- 0.0
x1785:
- 0.0
- 0.0
- 0.0
x1786:
- 0.0
- 0.0
- 0.0
x1787:
- 0.0
- 0.0
- 0.0
x1788:
- 0.0
- 0.0
- 0.0
x1789:
- 0.0
- 0.0
- 0.0
x179:
- 0.0
- 0.0
- 0.0
x1790:
- 0.0
- 0.0
- 0.0
x1791:
- 0.0
- 0.0
- 0.0
x1792:
- 0.0
- 0.0
- 0.0
x1793:
- 0.0
- 0.0
- 0.0
x1794:
- 0.0
- 0.0
- 0.0
x1795:
- 0.0
- 0.0
- 0.0
x1796:
- 0.0
- 0.0
- 0.0
x1797:
- 0.0
- 0.0
- 0.0
x1798:
- 0.0
- 0.0
- 0.0
x1799:
- 0.0
- 0.0
- 0.0
x18:
- 0.0
- 0.0
- 0.0
x180:
- 0.0
- 0.0
- 0.0
x1800:
- 0.0
- 0.0
- 0.0
x1801:
- 0.0
- 0.0
- 0.0
x1802:
- 0.0
- 0.0
- 0.0
x1803:
- 0.0
- 0.0
- 0.0
x1804:
- 0.0
- 0.0
- 0.0
x1805:
- 0.0
- 0.0
- 0.0
x1806:
- 0.0
- 0.0
- 0.0
x1807:
- 0.0
- 0.0
- 0.0
x1808:
- 0.0
- 0.0
- 0.0
x1809:
- 0.0
- 0.0
- 0.0
x181:
- 0.0
- 0.0
- 0.0
x1810:
- 0.0
- 0.0
- 0.0
x1811:
- 0.0
- 0.0
- 0.0
x1812:
- 0.0
- 0.0
- 0.0
x1813:
- 0.0
- 0.0
- 0.0
x1814:
- 0.0
- 0.0
- 0.0
x1815:
- 0.0
- 0.0
- 0.0
x1816:
- 0.0
- 0.0
- 0.0
x1817:
- 0.0
- 0.0
- 0.0
x1818:
- 0.0
- 0.0
- 0.0
x1819:
- 0.0
- 0.0
- 0.0
x182:
- 0.0
- 0.0
- 0.0
x1820:
- 0.0
- 0.0
- 0.0
x1821:
- 0.0
- 0.0
- 0.0
x1822:
- 0.0
- 0.0
- 0.0
x1823:
- 0.0
- 0.0
- 0.0
x1824:
- 0.0
- 0.0
- 0.0
x1825:
- 0.0
- 0.0
- 0.0
x1826:
- 0.0
- 0.0
- 0.0
x1827:
- 0.0
- 0.0
- 0.0
x1828:
- 0.0
- 0.0
- 0.0
x1829:
- 0.0
- 0.0
- 0.0
x183:
- 0.0
- 0.0
- 0.0
x1830:
- 0.0
- 0.0
- 0.0
x1831:
- 0.0
- 0.0
- 0.0
x1832:
- 0.0
- 0.0
- 0.0
x1833:
- 0.0
- 0.0
- 0.0
x1834:
- 0.0
- 0.0
- 0.0
x1835:
- 0.0
- 0.0
- 0.0
x1836:
- 0.0
- 0.0
- 0.0
x1837:
- 0.0
- 0.0
- 0.0
x1838:
- 0.0
- 0.0
- 0.0
x1839:
- 0.0
- 0.0
- 0.0
x184:
- 0.0
- 0.0
- 0.0
x1840:
- 0.0
- 0.0
- 0.0
x1841:
- 0.0
- 0.0
- 0.0
x1842:
- 0.0
- 0.0
- 0.0
x1843:
- 0.0
- 0.0
- 0.0
x1844:
- 0.0
- 0.0
- 0.0
x1845:
- 0.0
- 0.0
- 0.0
x1846:
- 0.0
- 0.0
- 0.0
x1847:
- 0.0
- 0.0
- 0.0
x1848:
- 0.0
- 0.0
- 0.0
x1849:
- 0.0
- 0.0
- 0.0
x185:
- 0.0
- 0.0
- 0.0
x1850:
- 0.0
- 0.0
- 0.0
x1851:
- 0.0
- 0.0
- 0.0
x1852:
- 0.0
- 0.0
- 0.0
x1853:
- 0.0
- 0.0
- 0.0
x1854:
- 0.0
- 0.0
- 0.0
x1855:
- 0.0
- 0.0
- 0.0
x1856:
- 0.0
- 0.0
- 0.0
x1857:
- 0.0
- 0.0
- 0.0
x1858:
- 0.0
- 0.0
- 0.0
x1859:
- 0.0
- 0.0
- 0.0
x186:
- 0.0
- 0.0
- 0.0
x1860:
- 0.0
- 0.0
- 0.0
x1861:
- 0.0
- 0.0
- 0.0
x1862:
- 0.0
- 0.0
- 0.0
x1863:
- 0.0
- 0.0
- 0.0
x1864:
- 0.0
- 0.0
- 0.0
x1865:
- 0.0
- 0.0
- 0.0
x1866:
- 0.0
- 0.0
- 0.0
x1867:
- 0.0
- 0.0
- 0.0
x1868:
- 0.0
- 0.0
- 0.0
x1869:
- 0.0
- 0.0
- 0.0
x187:
- 0.0
- 0.0
- 0.0
x1870:
- 0.0
- 0.0
- 0.0
x1871:
- 0.0
- 0.0
- 0.0
x1872:
- 0.0
- 0.0
- 0.0
x1873:
- 0.0
- 0.0
- 0.0
x1874:
- 0.0
- 0.0
- 0.0
x1875:
- 0.0
- 0.0
- 0.0
x1876:
- 0.0
- 0.0
- 0.0
x1877:
- 0.0
- 0.0
- 0.0
x1878:
- 0.0
- 0.0
- 0.0
x1879:
- 0.0
- 0.0
- 0.0
x188:
- 0.0
- 0.0
- 0.0
x1880:
- 0.0
- 0.0
- 0.0
x1881:
- 0.0
- 0.0
- 0.0
x1882:
- 0.0
- 0.0
- 0.0
x1883:
- 0.0
- 0.0
- 0.0
x1884:
- 0.0
- 0.0
- 0.0
x1885:
- 0.0
- 0.0
- 0.0
x1886:
- 0.0
- 0.0
- 0.0
x1887:
- 0.0
- 0.0
- 0.0
x1888:
- 0.0
- 0.0
- 0.0
x1889:
- 0.0
- 0.0
- 0.0
x189:
- 0.0
- 0.0
- 0.0
x1890:
- 0.0
- 0.0
- 0.0
x1891:
- 0.0
- 0.0
- 0.0
x1892:
- 0.0
- 0.0
- 0.0
x1893:
- 0.0
- 0.0
- 0.0
x1894:
- 0.0
- 0.0
- 0.0
x1895:
- 0.0
- 0.0
- 0.0
x1896:
- 0.0
- 0.0
- 0.0
x1897:
- 0.0
- 0.0
- 0.0
x1898:
- 0.0
- 0.0
- 0.0
x1899:
- 0.0
- 0.0
- 0.0
x19:
- 0.0
- 0.0
- 1.0
x190:
- 0.0
- 0.0
- 0.0
x1900:
- 0.0
- 0.0
- 0.0
x1901:
- 0.0
- 0.0
- 0.0
x1902:
- 0.0
- 0.0
- 0.0
x1903:
- 0.0
- 0.0
- 0.0
x1904:
- 0.0
- 0.0
- 0.0
x1905:
- 0.0
- 0.0
- 0.0
x1906:
- 0.0
- 0.0
- 0.0
x1907:
- 0.0
- 0.0
- 0.0
x1908:
- 0.0
- 0.0
- 0.0
x1909:
- 0.0
- 0.0
- 0.0
x191:
- 0.0
- 0.0
- 0.0
x1910:
- 0.0
- 0.0
- 0.0
x1911:
- 0.0
- 0.0
- 0.0
x1912:
- 0.0
- 0.0
- 0.0
x1913:
- 0.0
- 0.0
- 0.0
x1914:
- 0.0
- 0.0
- 0.0
x1915:
- 0.0
- 0.0
- 0.0
x1916:
- 0.0
- 0.0
- 0.0
x1917:
- 0.0
- 0.0
- 0.0
x1918:
- 0.0
- 0.0
- 0.0
x1919:
- 0.0
- 0.0
- 0.0
x192:
- 0.0
- 0.0
- 0.0
x1920:
- 0.0
- 0.0
- 0.0
x1921:
- 0.0
- 0.0
- 0.0
x1922:
- 0.0
- 0.0
- 0.0
x1923:
- 0.0
- 0.0
- 0.0
x1924:
- 0.0
- 0.0
- 0.0
x1925:
- 0.0
- 0.0
- 0.0
x1926:
- 0.0
- 0.0
- 0.0
x1927:
- 0.0
- 0.0
- 0.0
x1928:
- 0.0
- 0.0
- 0.0
x1929:
- 0.0
- 0.0
- 0.0
x193:
- 0.0
- 0.0
- 0.0
x1930:
- 0.0
- 0.0
- 0.0
x1931:
- 0.0
- 0.0
- 0.0
x1932:
- 0.0
- 0.0
- 0.0
x1933:
- 0.0
- 0.0
- 0.0
x1934:
- 0.0
- 0.0
- 0.0
x1935:
- 0.0
- 0.0
- 0.0
x1936:
- 0.0
- 0.0
- 0.0
x1937:
- 0.0
- 0.0
- 0.0
x1938:
- 0.0
- 0.0
- 0.0
x1939:
- 0.0
- 0.0
- 0.0
x194:
- 0.0
- 0.0
- 0.0
x1940:
- 0.0
- 0.0
- 0.0
x1941:
- 0.0
- 0.0
- 0.0
x1942:
- 0.0
- 0.0
- 0.0
x1943:
- 0.0
- 0.0
- 0.0
x1944:
- 0.0
- 0.0
- 0.0
x1945:
- 0.0
- 0.0
- 0.0
x1946:
- 0.0
- 0.0
- 0.0
x1947:
- 0.0
- 0.0
- 0.0
x1948:
- 0.0
- 0.0
- 0.0
x1949:
- 0.0
- 0.0
- 0.0
x195:
- 0.0
- 0.0
- 0.0
x1950:
- 0.0
- 0.0
- 0.0
x1951:
- 0.0
- 0.0
- 0.0
x1952:
- 0.0
- 0.0
- 0.0
x1953:
- 0.0
- 0.0
- 0.0
x1954:
- 0.0
- 0.0
- 0.0
x1955:
- 0.0
- 0.0
- 0.0
x1956:
- 0.0
- 0.0
- 0.0
x1957:
- 0.0
- 0.0
- 0.0
x1958:
- 0.0
- 0.0
- 0.0
x1959:
- 0.0
- 0.0
- 0.0
x196:
- 0.0
- 0.0
- 0.0
x1960:
- 0.0
- 0.0
- 0.0
x1961:
- 0.0
- 0.0
- 0.0
x1962:
- 0.0
- 0.0
- 0.0
x1963:
- 0.0
- 0.0
- 0.0
x1964:
- 0.0
- 0.0
- 0.0
x1965:
- 0.0
- 0.0
- 0.0
x1966:
- 0.0
- 0.0
- 0.0
x1967:
- 0.0
- 0.0
- 0.0
x1968:
- 0.0
- 0.0
- 0.0
x1969:
- 0.0
- 0.0
- 0.0
x197:
- 0.0
- 0.0
- 0.0
x1970:
- 0.0
- 0.0
- 0.0
x1971:
- 0.0
- 0.0
- 0.0
x1972:
- 0.0
- 0.0
- 0.0
x1973:
- 0.0
- 0.0
- 0.0
x1974:
- 0.0
- 0.0
- 0.0
x1975:
- 0.0
- 0.0
- 0.0
x1976:
- 0.0
- 0.0
- 0.0
x1977:
- 0.0
- 0.0
- 0.0
x1978:
- 0.0
- 0.0
- 0.0
x1979:
- 0.0
- 0.0
- 0.0
x198:
- 0.0
- 0.0
- 0.0
x1980:
- 0.0
- 0.0
- 0.0
x1981:
- 0.0
- 0.0
- 0.0
x1982:
- 0.0
- 0.0
- 0.0
x1983:
- 0.0
- 0.0
- 0.0
x1984:
- 0.0
- 0.0
- 0.0
x1985:
- 0.0
- 0.0
- 0.0
x1986:
- 0.0
- 0.0
- 0.0
x1987:
- 0.0
- 0.0
- 0.0
x1988:
- 0.0
- 0.0
- 0.0
x1989:
- 0.0
- 0.0
- 0.0
x199:
- 0.0
- 0.0
- 0.0
x1990:
- 0.0
- 0.0
- 0.0
x1991:
- 0.0
- 0.0
- 0.0
x1992:
- 0.0
- 0.0
- 0.0
x1993:
- 0.0
- 0.0
- 0.0
x1994:
- 0.0
- 0.0
- 0.0
x1995:
- 0.0
- 0.0
- 0.0
x1996:
- 0.0
- 0.0
- 0.0
x1997:
- 0.0
- 0.0
- 0.0
x1998:
- 0.0
- 0.0
- 0.0
x1999:
- 0.0
- 0.0
- 0.0
x2:
- 0.0
- 0.0
- 0.0
x20:
- 0.0
- 0.0
- 0.0
x200:
- 0.0
- 0.0
- 0.0
x2000:
- 0.0
- 0.0
- 0.0
x2001:
- 0.0
- 0.0
- 0.0
x2002:
- 0.0
- 0.0
- 0.0
x2003:
- 0.0
- 0.0
- 0.0
x2004:
- 0.0
- 0.0
- 0.0
x2005:
- 0.0
- 0.0
- 0.0
x2006:
- 0.0
- 0.0
- 0.0
x2007:
- 0.0
- 0.0
- 0.0
x2008:
- 0.0
- 0.0
- 0.0
x2009:
- 0.0
- 0.0
- 0.0
x201:
- 0.0
- 0.0
- 0.0
x2010:
- 0.0
- 0.0
- 0.0
x2011:
- 0.0
- 0.0
- 0.0
x2012:
- 0.0
- 0.0
- 0.0
x2013:
- 0.0
- 0.0
- 0.0
x2014:
- 0.0
- 0.0
- 0.0
x2015:
- 0.0
- 0.0
- 0.0
x2016:
- 0.0
- 0.0
- 0.0
x2017:
- 0.0
- 0.0
- 0.0
x2018:
- 0.0
- 0.0
- 0.0
x2019:
- 0.0
- 0.0
- 0.0
x202:
- 0.0
- 0.0
- 0.0
x2020:
- 0.0
- 0.0
- 0.0
x2021:
- 0.0
- 0.0
- 0.0
x2022:
- 0.0
- 0.0
- 0.0
x2023:
- 0.0
- 0.0
- 0.0
x2024:
- 0.0
- 0.0
- 0.0
x2025:
- 0.0
- 0.0
- 0.0
x2026:
- 0.0
- 0.0
- 0.0
x2027:
- 0.0
- 0.0
- 0.0
x2028:
- 0.0
- 0.0
- 0.0
x2029:
- 0.0
- 0.0
- 0.0
x203:
- 0.0
- 0.0
- 0.0
x2030:
- 0.0
- 0.0
- 0.0
x2031:
- 0.0
- 0.0
- 0.0
x2032:
- 0.0
- 0.0
- 0.0
x2033:
- 0.0
- 0.0
- 0.0
x2034:
- 0.0
- 0.0
- 0.0
x2035:
- 0.0
- 0.0
- 0.0
x2036:
- 0.0
- 0.0
- 0.0
x2037:
- 0.0
- 0.0
- 0.0
x2038:
- 0.0
- 0.0
- 0.0
x2039:
- 0.0
- 0.0
- 0.0
x204:
- 0.0
- 0.0
- 0.0
x2040:
- 0.0
- 0.0
- 0.0
x2041:
- 0.0
- 0.0
- 0.0
x2042:
- 0.0
- 0.0
- 0.0
x2043:
- 0.0
- 0.0
- 0.0
x2044:
- 0.0
- 0.0
- 0.0
x2045:
- 0.0
- 0.0
- 0.0
x2046:
- 0.0
- 0.0
- 0.0
x2047:
- 0.0
- 0.0
- 0.0
x2048:
- 0.0
- 0.0
- 0.0
x2049:
- 0.0
- 0.0
- 0.0
x205:
- 0.0
- 0.0
- 0.0
x2050:
- 0.0
- 0.0
- 0.0
x2051:
- 0.0
- 0.0
- 0.0
x2052:
- 0.0
- 0.0
- 0.0
x2053:
- 0.0
- 0.0
- 0.0
x2054:
- 0.0
- 0.0
- 0.0
x2055:
- 0.0
- 0.0
- 0.0
x2056:
- 0.0
- 0.0
- 0.0
x2057:
- 0.0
- 0.0
- 0.0
x2058:
- 0.0
- 0.0
- 0.0
x2059:
- 0.0
- 0.0
- 0.0
x206:
- 0.0
- 0.0
- 0.0
x2060:
- 0.0
- 0.0
- 0.0
x2061:
- 0.0
- 0.0
- 0.0
x2062:
- 0.0
- 0.0
- 0.0
x2063:
- 0.0
- 0.0
- 0.0
x2064:
- 0.0
- 0.0
- 0.0
x2065:
- 0.0
- 0.0
- 0.0
x2066:
- 0.0
- 0.0
- 0.0
x2067:
- 0.0
- 0.0
- 0.0
x2068:
- 0.0
- 0.0
- 0.0
x2069:
- 0.0
- 0.0
- 0.0
x207:
- 0.0
- 0.0
- 0.0
x2070:
- 0.0
- 0.0
- 0.0
x2071:
- 0.0
- 0.0
- 0.0
x2072:
- 0.0
- 0.0
- 0.0
x2073:
- 0.0
- 0.0
- 0.0
x2074:
- 0.0
- 0.0
- 0.0
x2075:
- 0.0
- 0.0
- 0.0
x2076:
- 0.0
- 0.0
- 0.0
x2077:
- 0.0
- 0.0
- 0.0
x2078:
- 0.0
- 0.0
- 0.0
x2079:
- 0.0
- 0.0
- 0.0
x208:
- 0.0
- 0.0
- 0.0
x2080:
- 0.0
- 0.0
- 0.0
x2081:
- 0.0
- 0.0
- 0.0
x2082:
- 0.0
- 0.0
- 0.0
x2083:
- 0.0
- 0.0
- 0.0
x2084:
- 0.0
- 0.0
- 0.0
x2085:
- 0.0
- 0.0
- 0.0
x2086:
- 0.0
- 0.0
- 0.0
x2087:
- 0.0
- 0.0
- 0.0
x2088:
- 0.0
- 0.0
- 0.0
x2089:
- 0.0
- 0.0
- 0.0
x209:
- 0.0
- 0.0
- 0.0
x2090:
- 0.0
- 0.0
- 0.0
x2091:
- 0.0
- 0.0
- 0.0
x2092:
- 0.0
- 0.0
- 0.0
x2093:
- 0.0
- 0.0
- 0.0
x2094:
- 0.0
- 0.0
- 0.0
x2095:
- 0.0
- 0.0
- 0.0
x2096:
- 0.0
- 0.0
- 0.0
x2097:
- 0.0
- 0.0
- 0.0
x2098:
- 0.0
- 0.0
- 0.0
x2099:
- 0.0
- 0.0
- 0.0
x21:
- 0.0
- 0.0
- 0.0
x210:
- 0.0
- 0.0
- 0.0
x2100:
- 0.0
- 0.0
- 0.0
x2101:
- 0.0
- 0.0
- 0.0
x2102:
- 0.0
- 0.0
- 0.0
x2103:
- 0.0
- 0.0
- 0.0
x2104:
- 0.0
- 0.0
- 0.0
x2105:
- 0.0
- 0.0
- 0.0
x2106:
- 0.0
- 0.0
- 0.0
x2107:
- 0.0
- 0.0
- 0.0
x2108:
- 0.0
- 0.0
- 0.0
x2109:
- 0.0
- 0.0
- 0.0
x211:
- 0.0
- 0.0
- 0.0
x2110:
- 0.0
- 0.0
- 0.0
x2111:
- 0.0
- 0.0
- 0.0
x2112:
- 0.0
- 0.0
- 0.0
x2113:
- 0.0
- 0.0
- 0.0
x2114:
- 0.0
- 0.0
- 0.0
x2115:
- 0.0
- 0.0
- 0.0
x2116:
- 0.0
- 0.0
- 0.0
x2117:
- 0.0
- 0.0
- 0.0
x2118:
- 0.0
- 0.0
- 0.0
x2119:
- 0.0
- 0.0
- 0.0
x212:
- 0.0
- 0.0
- 0.0
x2120:
- 0.0
- 0.0
- 0.0
x2121:
- 0.0
- 0.0
- 0.0
x2122:
- 0.0
- 0.0
- 0.0
x2123:
- 0.0
- 0.0
- 0.0
x2124:
- 0.0
- 0.0
- 0.0
x2125:
- 0.0
- 0.0
- 0.0
x2126:
- 0.0
- 0.0
- 0.0
x2127:
- 0.0
- 0.0
- 0.0
x2128:
- 0.0
- 0.0
- 0.0
x2129:
- 0.0
- 0.0
- 0.0
x213:
- 0.0
- 0.0
- 0.0
x2130:
- 0.0
- 0.0
- 0.0
x2131:
- 0.0
- 0.0
- 0.0
x2132:
- 0.0
- 0.0
- 0.0
x2133:
- 0.0
- 0.0
- 0.0
x2134:
- 0.0
- 0.0
- 0.0
x2135:
- 0.0
- 0.0
- 0.0
x2136:
- 0.0
- 0.0
- 0.0
x2137:
- 0.0
- 0.0
- 0.0
x2138:
- 0.0
- 0.0
- 0.0
x2139:
- 0.0
- 0.0
- 0.0
x214:
- 0.0
- 0.0
- 0.0
x2140:
- 0.0
- 0.0
- 0.0
x2141:
- 0.0
- 0.0
- 0.0
x2142:
- 0.0
- 0.0
- 0.0
x2143:
- 0.0
- 0.0
- 0.0
x2144:
- 0.0
- 0.0
- 0.0
x2145:
- 0.0
- 0.0
- 0.0
x2146:
- 0.0
- 0.0
- 0.0
x2147:
- 0.0
- 0.0
- 0.0
x2148:
- 0.0
- 0.0
- 0.0
x2149:
- 0.0
- 0.0
- 0.0
x215:
- 0.0
- 0.0
- 0.0
x2150:
- 0.0
- 0.0
- 0.0
x2151:
- 0.0
- 0.0
- 0.0
x2152:
- 0.0
- 0.0
- 0.0
x2153:
- 0.0
- 0.0
- 0.0
x2154:
- 0.0
- 0.0
- 0.0
x2155:
- 0.0
- 0.0
- 0.0
x2156:
- 0.0
- 0.0
- 0.0
x2157:
- 0.0
- 0.0
- 0.0
x2158:
- 0.0
- 0.0
- 0.0
x2159:
- 0.0
- 0.0
- 0.0
x216:
- 0.0
- 0.0
- 0.0
x2160:
- 0.0
- 0.0
- 0.0
x2161:
- 0.0
- 0.0
- 0.0
x2162:
- 0.0
- 0.0
- 0.0
x2163:
- 0.0
- 0.0
- 0.0
x2164:
- 0.0
- 0.0
- 0.0
x2165:
- 0.0
- 0.0
- 0.0
x2166:
- 0.0
- 0.0
- 0.0
x2167:
- 0.0
- 0.0
- 0.0
x2168:
- 0.0
- 0.0
- 0.0
x2169:
- 0.0
- 0.0
- 0.0
x217:
- 0.0
- 0.0
- 0.0
x2170:
- 0.0
- 0.0
- 0.0
x2171:
- 0.0
- 0.0
- 0.0
x2172:
- 0.0
- 0.0
- 0.0
x2173:
- 0.0
- 0.0
- 0.0
x2174:
- 0.0
- 0.0
- 0.0
x2175:
- 0.0
- 0.0
- 0.0
x2176:
- 0.0
- 0.0
- 0.0
x2177:
- 0.0
- 0.0
- 0.0
x2178:
- 0.0
- 0.0
- 0.0
x2179:
- 0.0
- 0.0
- 0.0
x218:
- 0.0
- 0.0
- 0.0
x2180:
- 0.0
- 0.0
- 0.0
x2181:
- 0.0
- 0.0
- 0.0
x2182:
- 0.0
- 0.0
- 0.0
x2183:
- 0.0
- 0.0
- 0.0
x2184:
- 0.0
- 0.0
- 0.0
x2185:
- 0.0
- 0.0
- 0.0
x2186:
- 0.0
- 0.0
- 0.0
x2187:
- 0.0
- 0.0
- 0.0
x2188:
- 0.0
- 0.0
- 0.0
x2189:
- 0.0
- 0.0
- 0.0
x219:
- 0.0
- 0.0
- 0.0
x2190:
- 0.0
- 0.0
- 0.0
x2191:
- 0.0
- 0.0
- 0.0
x2192:
- 0.0
- 0.0
- 0.0
x2193:
- 0.0
- 0.0
- 0.0
x2194:
- 0.0
- 0.0
- 0.0
x2195:
- 0.0
- 0.0
- 0.0
x2196:
- 0.0
- 0.0
- 0.0
x2197:
- 0.0
- 0.0
- 0.0
x2198:
- 0.0
- 0.0
- 0.0
x2199:
- 0.0
- 0.0
- 0.0
x22:
- 0.0
- 0.0
- 0.0
x220:
- 0.0
- 0.0
- 0.0
x2200:
- 0.0
- 0.0
- 0.0
x2201:
- 0.0
- 0.0
- 0.0
x2202:
- 0.0
- 0.0
- 0.0
x2203:
- 0.0
- 0.0
- 0.0
x2204:
- 0.0
- 0.0
- 0.0
x2205:
- 0.0
- 0.0
- 0.0
x2206:
- 0.0
- 0.0
- 0.0
x2207:
- 0.0
- 0.0
- 0.0
x2208:
- 0.0
- 0.0
- 0.0
x2209:
- 0.0
- 0.0
- 0.0
x221:
- 0.0
- 0.0
- 0.0
x2210:
- 0.0
- 0.0
- 0.0
x2211:
- 0.0
- 0.0
- 0.0
x2212:
- 0.0
- 0.0
- 0.0
x2213:
- 0.0
- 0.0
- 0.0
x2214:
- 0.0
- 0.0
- 0.0
x2215:
- 0.0
- 0.0
- 0.0
x2216:
- 0.0
- 0.0
- 0.0
x2217:
- 0.0
- 0.0
- 0.0
x2218:
- 0.0
- 0.0
- 0.0
x2219:
- 0.0
- 0.0
- 0.0
x222:
- 0.0
- 0.0
- 0.0
x2220:
- 0.0
- 0.0
- 0.0
x2221:
- 0.0
- 0.0
- 0.0
x2222:
- 0.0
- 0.0
- 0.0
x2223:
- 0.0
- 0.0
- 0.0
x2224:
- 0.0
- 0.0
- 0.0
x2225:
- 0.0
- 0.0
- 0.0
x2226:
- 0.0
- 0.0
- 0.0
x2227:
- 0.0
- 0.0
- 0.0
x2228:
- 0.0
- 0.0
- 0.0
x2229:
- 0.0
- 0.0
- 0.0
x223:
- 0.0
- 0.0
- 0.0
x2230:
- 0.0
- 0.0
- 0.0
x2231:
- 0.0
- 0.0
- 0.0
x2232:
- 0.0
- 0.0
- 0.0
x2233:
- 0.0
- 0.0
- 0.0
x2234:
- 0.0
- 0.0
- 0.0
x2235:
- 0.0
- 0.0
- 0.0
x2236:
- 0.0
- 0.0
- 0.0
x2237:
- 0.0
- 0.0
- 0.0
x2238:
- 0.0
- 0.0
- 0.0
x2239:
- 0.0
- 0.0
- 0.0
x224:
- 0.0
- 0.0
- 0.0
x2240:
- 0.0
- 0.0
- 0.0
x2241:
- 0.0
- 0.0
- 0.0
x2242:
- 0.0
- 0.0
- 0.0
x2243:
- 0.0
- 0.0
- 0.0
x2244:
- 0.0
- 0.0
- 0.0
x2245:
- 0.0
- 0.0
- 0.0
x2246:
- 0.0
- 0.0
- 0.0
x2247:
- 0.0
- 0.0
- 0.0
x2248:
- 0.0
- 0.0
- 0.0
x2249:
- 0.0
- 0.0
- 0.0
x225:
- 0.0
- 0.0
- 0.0
x2250:
- 0.0
- 0.0
- 0.0
x2251:
- 0.0
- 0.0
- 0.0
x2252:
- 0.0
- 0.0
- 0.0
x2253:
- 0.0
- 0.0
- 0.0
x2254:
- 0.0
- 0.0
- 0.0
x2255:
- 0.0
- 0.0
- 0.0
x2256:
- 0.0
- 0.0
- 0.0
x2257:
- 0.0
- 0.0
- 0.0
x2258:
- 0.0
- 0.0
- 0.0
x2259:
- 0.0
- 0.0
- 0.0
x226:
- 0.0
- 0.0
- 0.0
x2260:
- 0.0
- 0.0
- 0.0
x2261:
- 0.0
- 0.0
- 0.0
x2262:
- 0.0
- 0.0
- 0.0
x2263:
- 0.0
- 0.0
- 0.0
x2264:
- 0.0
- 0.0
- 0.0
x2265:
- 0.0
- 0.0
- 0.0
x2266:
- 0.0
- 0.0
- 0.0
x2267:
- 0.0
- 0.0
- 0.0
x2268:
- 0.0
- 0.0
- 0.0
x2269:
- 0.0
- 0.0
- 0.0
x227:
- 0.0
- 0.0
- 0.0
x2270:
- 0.0
- 0.0
- 0.0
x2271:
- 0.0
- 0.0
- 0.0
x2272:
- 0.0
- 0.0
- 0.0
x2273:
- 0.0
- 0.0
- 0.0
x2274:
- 0.0
- 0.0
- 0.0
x2275:
- 0.0
- 0.0
- 0.0
x2276:
- 0.0
- 0.0
- 0.0
x2277:
- 0.0
- 0.0
- 0.0
x2278:
- 0.0
- 0.0
- 0.0
x2279:
- 0.0
- 0.0
- 0.0
x228:
- 0.0
- 0.0
- 0.0
x2280:
- 0.0
- 0.0
- 0.0
x2281:
- 0.0
- 0.0
- 0.0
x2282:
- 0.0
- 0.0
- 0.0
x2283:
- 0.0
- 0.0
- 0.0
x2284:
- 0.0
- 0.0
- 0.0
x2285:
- 0.0
- 0.0
- 0.0
x2286:
- 0.0
- 0.0
- 0.0
x2287:
- 0.0
- 0.0
- 0.0
x2288:
- 0.0
- 0.0
- 0.0
x2289:
- 0.0
- 0.0
- 0.0
x229:
- 0.0
- 0.0
- 0.0
x2290:
- 0.0
- 0.0
- 0.0
x2291:
- 0.0
- 0.0
- 0.0
x2292:
- 0.0
- 0.0
- 0.0
x2293:
- 0.0
- 0.0
- 0.0
x2294:
- 0.0
- 0.0
- 0.0
x2295:
- 0.0
- 0.0
- 0.0
x2296:
- 0.0
- 0.0
- 0.0
x2297:
- 0.0
- 0.0
- 0.0
x2298:
- 0.0
- 0.0
- 0.0
x2299:
- 0.0
- 0.0
- 0.0
x23:
- 0.0
- 0.0
- 0.0
x230:
- 0.0
- 0.0
- 0.0
x2300:
- 0.0
- 0.0
- 0.0
x2301:
- 0.0
- 0.0
- 0.0
x2302:
- 0.0
- 0.0
- 0.0
x2303:
- 0.0
- 0.0
- 0.0
x2304:
- 0.0
- 0.0
- 0.0
x2305:
- 0.0
- 0.0
- 0.0
x2306:
- 0.0
- 0.0
- 0.0
x2307:
- 0.0
- 0.0
- 0.0
x2308:
- 0.0
- 0.0
- 0.0
x2309:
- 0.0
- 0.0
- 0.0
x231:
- 0.0
- 0.0
- 0.0
x2310:
- 0.0
- 0.0
- 0.0
x2311:
- 0.0
- 0.0
- 0.0
x2312:
- 0.0
- 0.0
- 0.0
x2313:
- 0.0
- 0.0
- 0.0
x2314:
- 0.0
- 0.0
- 0.0
x2315:
- 0.0
- 0.0
- 0.0
x2316:
- 0.0
- 0.0
- 0.0
x2317:
- 0.0
- 0.0
- 0.0
x2318:
- 0.0
- 0.0
- 0.0
x2319:
- 0.0
- 0.0
- 0.0
x232:
- 0.0
- 0.0
- 0.0
x2320:
- 0.0
- 0.0
- 0.0
x2321:
- 0.0
- 0.0
- 0.0
x2322:
- 0.0
- 0.0
- 0.0
x2323:
- 0.0
- 0.0
- 0.0
x2324:
- 0.0
- 0.0
- 0.0
x2325:
- 0.0
- 0.0
- 0.0
x2326:
- 0.0
- 0.0
- 0.0
x2327:
- 0.0
- 0.0
- 0.0
x2328:
- 0.0
- 0.0
- 0.0
x2329:
- 0.0
- 0.0
- 0.0
x233:
- 0.0
- 0.0
- 0.0
x2330:
- 0.0
- 0.0
- 0.0
x2331:
- 0.0
- 0.0
- 0.0
x2332:
- 0.0
- 0.0
- 0.0
x2333:
- 0.0
- 0.0
- 0.0
x2334:
- 0.0
- 0.0
- 0.0
x2335:
- 0.0
- 0.0
- 0.0
x2336:
- 0.0
- 0.0
- 0.0
x2337:
- 0.0
- 0.0
- 0.0
x2338:
- 0.0
- 0.0
- 0.0
x2339:
- 0.0
- 0.0
- 0.0
x234:
- 0.0
- 0.0
- 0.0
x2340:
- 0.0
- 0.0
- 0.0
x2341:
- 0.0
- 0.0
- 0.0
x2342:
- 0.0
- 0.0
- 0.0
x2343:
- 0.0
- 0.0
- 0.0
x2344:
- 0.0
- 0.0
- 0.0
x2345:
- 0.0
- 0.0
- 0.0
x2346:
- 0.0
- 0.0
- 0.0
x2347:
- 0.0
- 0.0
- 0.0
x2348:
- 0.0
- 0.0
- 0.0
x2349:
- 0.0
- 0.0
- 0.0
x235:
- 0.0
- 0.0
- 0.0
x2350:
- 0.0
- 0.0
- 0.0
x2351:
- 0.0
- 0.0
- 0.0
x2352:
- 0.0
- 0.0
- 0.0
x2353:
- 0.0
- 0.0
- 0.0
x2354:
- 0.0
- 0.0
- 0.0
x2355:
- 0.0
- 0.0
- 0.0
x2356:
- 0.0
- 0.0
- 0.0
x2357:
- 0.0
- 0.0
- 0.0
x2358:
- 0.0
- 0.0
- 0.0
x2359:
- 0.0
- 0.0
- 0.0
x236:
- 0.0
- 0.0
- 0.0
x2360:
- 0.0
- 0.0
- 0.0
x2361:
- 0.0
- 0.0
- 0.0
x2362:
- 0.0
- 0.0
- 0.0
x2363:
- 0.0
- 0.0
- 0.0
x2364:
- 0.0
- 0.0
- 0.0
x2365:
- 0.0
- 0.0
- 0.0
x2366:
- 0.0
- 0.0
- 0.0
x2367:
- 0.0
- 0.0
- 0.0
x2368:
- 0.0
- 0.0
- 0.0
x2369:
- 0.0
- 0.0
- 0.0
x237:
- 0.0
- 0.0
- 0.0
x2370:
- 0.0
- 0.0
- 0.0
x2371:
- 0.0
- 0.0
- 0.0
x2372:
- 0.0
- 0.0
- 0.0
x2373:
- 0.0
- 0.0
- 0.0
x2374:
- 0.0
- 0.0
- 0.0
x2375:
- 0.0
- 0.0
- 0.0
x2376:
- 0.0
- 0.0
- 0.0
x2377:
- 0.0
- 0.0
- 0.0
x2378:
- 0.0
- 0.0
- 0.0
x2379:
- 0.0
- 0.0
- 0.0
x238:
- 0.0
- 0.0
- 0.0
x2380:
- 0.0
- 0.0
- 0.0
x2381:
- 0.0
- 0.0
- 0.0
x2382:
- 0.0
- 0.0
- 0.0
x2383:
- 0.0
- 0.0
- 0.0
x2384:
- 0.0
- 0.0
- 0.0
x2385:
- 0.0
- 0.0
- 0.0
x2386:
- 0.0
- 0.0
- 0.0
x2387:
- 0.0
- 0.0
- 0.0
x2388:
- 0.0
- 0.0
- 0.0
x2389:
- 0.0
- 0.0
- 0.0
x239:
- 0.0
- 0.0
- 0.0
x2390:
- 0.0
- 0.0
- 0.0
x2391:
- 0.0
- 0.0
- 0.0
x2392:
- 0.0
- 0.0
- 0.0
x2393:
- 0.0
- 0.0
- 0.0
x2394:
- 0.0
- 0.0
- 0.0
x2395:
- 0.0
- 0.0
- 0.0
x2396:
- 0.0
- 0.0
- 0.0
x2397:
- 0.0
- 0.0
- 0.0
x2398:
- 0.0
- 0.0
- 0.0
x2399:
- 0.0
- 0.0
- 0.0
x24:
- 0.0
- 0.0
- 0.0
x240:
- 0.0
- 0.0
- 0.0
x2400:
- 0.0
- 0.0
- 0.0
x2401:
- 0.0
- 0.0
- 0.0
x2402:
- 0.0
- 0.0
- 0.0
x2403:
- 0.0
- 0.0
- 0.0
x2404:
- 0.0
- 0.0
- 0.0
x2405:
- 0.0
- 0.0
- 0.0
x2406:
- 0.0
- 0.0
- 0.0
x2407:
- 0.0
- 0.0
- 0.0
x2408:
- 0.0
- 0.0
- 0.0
x2409:
- 0.0
- 0.0
- 0.0
x241:
- 0.0
- 0.0
- 0.0
x2410:
- 0.0
- 0.0
- 0.0
x2411:
- 0.0
- 0.0
- 0.0
x2412:
- 0.0
- 0.0
- 0.0
x2413:
- 0.0
- 0.0
- 0.0
x2414:
- 0.0
- 0.0
- 0.0
x2415:
- 0.0
- 0.0
- 0.0
x2416:
- 0.0
- 0.0
- 0.0
x2417:
- 0.0
- 0.0
- 0.0
x2418:
- 0.0
- 0.0
- 0.0
x2419:
- 0.0
- 0.0
- 0.0
x242:
- 0.0
- 0.0
- 0.0
x2420:
- 0.0
- 0.0
- 0.0
x2421:
- 0.0
- 0.0
- 0.0
x2422:
- 0.0
- 0.0
- 0.0
x2423:
- 0.0
- 0.0
- 0.0
x2424:
- 0.0
- 0.0
- 0.0
x2425:
- 0.0
- 0.0
- 0.0
x2426:
- 0.0
- 0.0
- 0.0
x2427:
- 0.0
- 0.0
- 0.0
x2428:
- 0.0
- 0.0
- 0.0
x2429:
- 0.0
- 0.0
- 0.0
x243:
- 0.0
- 0.0
- 0.0
x2430:
- 0.0
- 0.0
- 0.0
x2431:
- 0.0
- 0.0
- 0.0
x2432:
- 0.0
- 0.0
- 0.0
x2433:
- 0.0
- 0.0
- 0.0
x2434:
- 0.0
- 0.0
- 0.0
x2435:
- 0.0
- 0.0
- 0.0
x2436:
- 0.0
- 0.0
- 0.0
x2437:
- 0.0
- 0.0
- 0.0
x2438:
- 0.0
- 0.0
- 0.0
x2439:
- 0.0
- 0.0
- 0.0
x244:
- 0.0
- 0.0
- 0.0
x2440:
- 0.0
- 0.0
- 0.0
x2441:
- 0.0
- 0.0
- 0.0
x2442:
- 0.0
- 0.0
- 0.0
x2443:
- 0.0
- 0.0
- 0.0
x2444:
- 0.0
- 0.0
- 0.0
x2445:
- 0.0
- 0.0
- 0.0
x2446:
- 0.0
- 0.0
- 0.0
x2447:
- 0.0
- 0.0
- 0.0
x2448:
- 0.0
- 0.0
- 0.0
x2449:
- 0.0
- 0.0
- 0.0
x245:
- 0.0
- 0.0
- 0.0
x2450:
- 0.0
- 0.0
- 0.0
x2451:
- 0.0
- 0.0
- 0.0
x2452:
- 0.0
- 0.0
- 0.0
x2453:
- 0.0
- 0.0
- 0.0
x2454:
- 0.0
- 0.0
- 0.0
x2455:
- 0.0
- 0.0
- 0.0
x2456:
- 0.0
- 0.0
- 0.0
x2457:
- 0.0
- 0.0
- 0.0
x2458:
- 0.0
- 0.0
- 0.0
x2459:
- 0.0
- 0.0
- 0.0
x246:
- 0.0
- 0.0
- 0.0
x2460:
- 0.0
- 0.0
- 0.0
x2461:
- 0.0
- 0.0
- 0.0
x2462:
- 0.0
- 0.0
- 0.0
x2463:
- 0.0
- 0.0
- 0.0
x2464:
- 0.0
- 0.0
- 0.0
x2465:
- 0.0
- 0.0
- 0.0
x2466:
- 0.0
- 0.0
- 0.0
x2467:
- 0.0
- 0.0
- 0.0
x2468:
- 0.0
- 0.0
- 0.0
x2469:
- 0.0
- 0.0
- 0.0
x247:
- 0.0
- 0.0
- 0.0
x2470:
- 0.0
- 0.0
- 0.0
x2471:
- 0.0
- 0.0
- 0.0
x2472:
- 0.0
- 0.0
- 0.0
x2473:
- 0.0
- 0.0
- 0.0
x2474:
- 0.0
- 0.0
- 0.0
x2475:
- 0.0
- 0.0
- 0.0
x2476:
- 0.0
- 0.0
- 0.0
x2477:
- 0.0
- 0.0
- 0.0
x2478:
- 0.0
- 0.0
- 0.0
x2479:
- 0.0
- 0.0
- 0.0
x248:
- 0.0
- 0.0
- 0.0
x2480:
- 0.0
- 0.0
- 0.0
x2481:
- 0.0
- 0.0
- 0.0
x2482:
- 0.0
- 0.0
- 0.0
x2483:
- 0.0
- 0.0
- 0.0
x2484:
- 0.0
- 0.0
- 0.0
x2485:
- 0.0
- 0.0
- 0.0
x2486:
- 0.0
- 0.0
- 0.0
x2487:
- 0.0
- 0.0
- 0.0
x2488:
- 0.0
- 0.0
- 0.0
x2489:
- 0.0
- 0.0
- 0.0
x249:
- 0.0
- 0.0
- 0.0
x2490:
- 0.0
- 0.0
- 0.0
x2491:
- 0.0
- 0.0
- 0.0
x2492:
- 0.0
- 0.0
- 0.0
x2493:
- 0.0
- 0.0
- 0.0
x2494:
- 0.0
- 0.0
- 0.0
x2495:
- 0.0
- 0.0
- 0.0
x2496:
- 0.0
- 0.0
- 0.0
x2497:
- 0.0
- 0.0
- 0.0
x2498:
- 0.0
- 0.0
- 0.0
x2499:
- 0.0
- 0.0
- 0.0
x25:
- 0.0
- 0.0
- 0.0
x250:
- 0.0
- 0.0
- 0.0
x2500:
- 0.0
- 0.0
- 0.0
x2501:
- 0.0
- 0.0
- 0.0
x2502:
- 0.0
- 0.0
- 0.0
x2503:
- 0.0
- 0.0
- 0.0
x2504:
- 0.0
- 0.0
- 0.0
x2505:
- 0.0
- 0.0
- 0.0
x2506:
- 0.0
- 0.0
- 0.0
x2507:
- 0.0
- 0.0
- 0.0
x2508:
- 0.0
- 0.0
- 0.0
x2509:
- 0.0
- 0.0
- 0.0
x251:
- 0.0
- 0.0
- 0.0
x2510:
- 0.0
- 0.0
- 0.0
x2511:
- 0.0
- 0.0
- 0.0
x2512:
- 0.0
- 0.0
- 0.0
x2513:
- 0.0
- 0.0
- 0.0
x2514:
- 0.0
- 0.0
- 0.0
x2515:
- 0.0
- 0.0
- 0.0
x2516:
- 0.0
- 0.0
- 0.0
x2517:
- 0.0
- 0.0
- 0.0
x2518:
- 0.0
- 0.0
- 0.0
x2519:
- 0.0
- 0.0
- 0.0
x252:
- 0.0
- 0.0
- 0.0
x2520:
- 0.0
- 0.0
- 0.0
x2521:
- 0.0
- 0.0
- 0.0
x2522:
- 0.0
- 0.0
- 0.0
x2523:
- 0.0
- 0.0
- 0.0
x2524:
- 0.0
- 0.0
- 0.0
x2525:
- 0.0
- 0.0
- 0.0
x2526:
- 0.0
- 0.0
- 0.0
x2527:
- 0.0
- 0.0
- 0.0
x2528:
- 0.0
- 0.0
- 0.0
x2529:
- 0.0
- 0.0
- 0.0
x253:
- 0.0
- 0.0
- 0.0
x2530:
- 0.0
- 0.0
- 0.0
x2531:
- 0.0
- 0.0
- 0.0
x2532:
- 0.0
- 0.0
- 0.0
x2533:
- 0.0
- 0.0
- 0.0
x2534:
- 0.0
- 0.0
- 0.0
x2535:
- 0.0
- 0.0
- 0.0
x2536:
- 0.0
- 0.0
- 0.0
x2537:
- 0.0
- 0.0
- 0.0
x2538:
- 0.0
- 0.0
- 0.0
x2539:
- 0.0
- 0.0
- 0.0
x254:
- 0.0
- 0.0
- 0.0
x2540:
- 0.0
- 0.0
- 0.0
x2541:
- 0.0
- 0.0
- 0.0
x2542:
- 0.0
- 0.0
- 0.0
x2543:
- 0.0
- 0.0
- 0.0
x2544:
- 0.0
- 0.0
- 0.0
x2545:
- 0.0
- 0.0
- 0.0
x2546:
- 0.0
- 0.0
- 0.0
x2547:
- 0.0
- 0.0
- 0.0
x2548:
- 0.0
- 0.0
- 0.0
x2549:
- 0.0
- 0.0
- 0.0
x255:
- 0.0
- 0.0
- 0.0
x2550:
- 0.0
- 0.0
- 0.0
x2551:
- 0.0
- 0.0
- 0.0
x2552:
- 0.0
- 0.0
- 0.0
x2553:
- 0.0
- 0.0
- 0.0
x2554:
- 0.0
- 0.0
- 0.0
x2555:
- 0.0
- 0.0
- 0.0
x2556:
- 0.0
- 0.0
- 0.0
x2557:
- 0.0
- 0.0
- 0.0
x2558:
- 0.0
- 0.0
- 0.0
x2559:
- 0.0
- 0.0
- 0.0
x256:
- 0.0
- 0.0
- 0.0
x2560:
- 0.0
- 0.0
- 0.0
x2561:
- 0.0
- 0.0
- 0.0
x2562:
- 0.0
- 0.0
- 0.0
x2563:
- 0.0
- 0.0
- 0.0
x2564:
- 0.0
- 0.0
- 0.0
x2565:
- 0.0
- 0.0
- 0.0
x2566:
- 0.0
- 0.0
- 0.0
x2567:
- 0.0
- 0.0
- 0.0
x2568:
- 0.0
- 0.0
- 0.0
x2569:
- 0.0
- 0.0
- 0.0
x257:
- 0.0
- 0.0
- 0.0
x2570:
- 0.0
- 0.0
- 0.0
x2571:
- 0.0
- 0.0
- 0.0
x2572:
- 0.0
- 0.0
- 0.0
x2573:
- 0.0
- 0.0
- 0.0
x2574:
- 0.0
- 0.0
- 0.0
x2575:
- 0.0
- 0.0
- 0.0
x2576:
- 0.0
- 0.0
- 0.0
x2577:
- 0.0
- 0.0
- 0.0
x2578:
- 0.0
- 0.0
- 0.0
x2579:
- 0.0
- 0.0
- 0.0
x258:
- 0.0
- 0.0
- 0.0
x2580:
- 0.0
- 0.0
- 0.0
x2581:
- 0.0
- 0.0
- 0.0
x2582:
- 0.0
- 0.0
- 0.0
x2583:
- 0.0
- 0.0
- 0.0
x2584:
- 0.0
- 0.0
- 0.0
x2585:
- 0.0
- 0.0
- 0.0
x2586:
- 0.0
- 0.0
- 0.0
x2587:
- 0.0
- 0.0
- 0.0
x2588:
- 0.0
- 0.0
- 0.0
x2589:
- 0.0
- 0.0
- 0.0
x259:
- 0.0
- 0.0
- 0.0
x2590:
- 0.0
- 0.0
- 0.0
x2591:
- 0.0
- 0.0
- 0.0
x2592:
- 0.0
- 0.0
- 0.0
x2593:
- 0.0
- 0.0
- 0.0
x2594:
- 0.0
- 0.0
- 0.0
x2595:
- 0.0
- 0.0
- 0.0
x2596:
- 0.0
- 0.0
- 0.0
x2597:
- 0.0
- 0.0
- 0.0
x2598:
- 0.0
- 0.0
- 0.0
x2599:
- 0.0
- 0.0
- 0.0
x26:
- 0.0
- 0.0
- 0.0
x260:
- 0.0
- 0.0
- 0.0
x2600:
- 0.0
- 0.0
- 0.0
x2601:
- 0.0
- 0.0
- 0.0
x2602:
- 0.0
- 0.0
- 0.0
x2603:
- 0.0
- 0.0
- 0.0
x2604:
- 0.0
- 0.0
- 0.0
x2605:
- 0.0
- 0.0
- 0.0
x2606:
- 0.0
- 0.0
- 0.0
x2607:
- 0.0
- 0.0
- 0.0
x2608:
- 0.0
- 0.0
- 0.0
x2609:
- 0.0
- 0.0
- 0.0
x261:
- 0.0
- 0.0
- 0.0
x2610:
- 0.0
- 0.0
- 0.0
x2611:
- 0.0
- 0.0
- 0.0
x2612:
- 0.0
- 0.0
- 0.0
x2613:
- 0.0
- 0.0
- 0.0
x2614:
- 0.0
- 0.0
- 0.0
x2615:
- 0.0
- 0.0
- 0.0
x2616:
- 0.0
- 0.0
- 0.0
x2617:
- 0.0
- 0.0
- 0.0
x2618:
- 0.0
- 0.0
- 0.0
x2619:
- 0.0
- 0.0
- 0.0
x262:
- 0.0
- 0.0
- 0.0
x2620:
- 0.0
- 0.0
- 0.0
x2621:
- 0.0
- 0.0
- 0.0
x2622:
- 0.0
- 0.0
- 0.0
x2623:
- 0.0
- 0.0
- 0.0
x2624:
- 0.0
- 0.0
- 0.0
x2625:
- 0.0
- 0.0
- 0.0
x2626:
- 0.0
- 0.0
- 0.0
x2627:
- 0.0
- 0.0
- 0.0
x2628:
- 0.0
- 0.0
- 0.0
x2629:
- 0.0
- 0.0
- 0.0
x263:
- 0.0
- 0.0
- 0.0
x2630:
- 0.0
- 0.0
- 0.0
x2631:
- 0.0
- 0.0
- 0.0
x2632:
- 0.0
- 0.0
- 0.0
x2633:
- 0.0
- 0.0
- 0.0
x2634:
- 0.0
- 0.0
- 0.0
x2635:
- 0.0
- 0.0
- 0.0
x2636:
- 0.0
- 0.0
- 0.0
x2637:
- 0.0
- 0.0
- 0.0
x2638:
- 0.0
- 0.0
- 0.0
x2639:
- 0.0
- 0.0
- 0.0
x264:
- 0.0
- 0.0
- 0.0
x2640:
- 0.0
- 0.0
- 0.0
x2641:
- 0.0
- 0.0
- 0.0
x2642:
- 0.0
- 0.0
- 0.0
x2643:
- 0.0
- 0.0
- 0.0
x2644:
- 0.0
- 0.0
- 0.0
x2645:
- 0.0
- 0.0
- 0.0
x2646:
- 0.0
- 0.0
- 0.0
x2647:
- 0.0
- 0.0
- 0.0
x2648:
- 0.0
- 0.0
- 0.0
x2649:
- 0.0
- 0.0
- 0.0
x265:
- 0.0
- 0.0
- 0.0
x2650:
- 0.0
- 0.0
- 0.0
x2651:
- 0.0
- 0.0
- 0.0
x2652:
- 0.0
- 0.0
- 0.0
x2653:
- 0.0
- 0.0
- 0.0
x2654:
- 0.0
- 0.0
- 0.0
x2655:
- 0.0
- 0.0
- 0.0
x2656:
- 0.0
- 0.0
- 0.0
x2657:
- 0.0
- 0.0
- 0.0
x2658:
- 0.0
- 0.0
- 0.0
x2659:
- 0.0
- 0.0
- 0.0
x266:
- 0.0
- 0.0
- 0.0
x2660:
- 0.0
- 0.0
- 0.0
x2661:
- 0.0
- 0.0
- 0.0
x2662:
- 0.0
- 0.0
- 0.0
x2663:
- 0.0
- 0.0
- 0.0
x2664:
- 0.0
- 0.0
- 0.0
x2665:
- 0.0
- 0.0
- 0.0
x2666:
- 0.0
- 0.0
- 0.0
x2667:
- 0.0
- 0.0
- 0.0
x2668:
- 0.0
- 0.0
- 0.0
x2669:
- 0.0
- 0.0
- 0.0
x267:
- 0.0
- 0.0
- 0.0
x2670:
- 0.0
- 0.0
- 0.0
x2671:
- 0.0
- 0.0
- 0.0
x2672:
- 0.0
- 0.0
- 0.0
x2673:
- 0.0
- 0.0
- 0.0
x2674:
- 0.0
- 0.0
- 0.0
x2675:
- 0.0
- 0.0
- 0.0
x2676:
- 0.0
- 0.0
- 0.0
x2677:
- 0.0
- 0.0
- 0.0
x2678:
- 0.0
- 0.0
- 0.0
x2679:
- 0.0
- 0.0
- 0.0
x268:
- 0.0
- 0.0
- 0.0
x2680:
- 0.0
- 0.0
- 0.0
x2681:
- 0.0
- 0.0
- 0.0
x2682:
- 0.0
- 0.0
- 0.0
x2683:
- 0.0
- 0.0
- 0.0
x2684:
- 0.0
- 0.0
- 0.0
x2685:
- 0.0
- 0.0
- 0.0
x2686:
- 0.0
- 0.0
- 0.0
x2687:
- 0.0
- 0.0
- 0.0
x2688:
- 0.0
- 0.0
- 0.0
x2689:
- 0.0
- 0.0
- 0.0
x269:
- 0.0
- 0.0
- 0.0
x2690:
- 0.0
- 0.0
- 0.0
x2691:
- 0.0
- 0.0
- 0.0
x2692:
- 0.0
- 0.0
- 0.0
x2693:
- 0.0
- 0.0
- 0.0
x2694:
- 0.0
- 0.0
- 0.0
x2695:
- 0.0
- 0.0
- 0.0
x2696:
- 0.0
- 0.0
- 0.0
x2697:
- 0.0
- 0.0
- 0.0
x2698:
- 0.0
- 0.0
- 0.0
x2699:
- 0.0
- 0.0
- 0.0
x27:
- 0.0
- 0.0
- 0.0
x270:
- 0.0
- 0.0
- 0.0
x2700:
- 0.0
- 0.0
- 0.0
x2701:
- 0.0
- 0.0
- 0.0
x2702:
- 0.0
- 0.0
- 0.0
x2703:
- 0.0
- 0.0
- 0.0
x2704:
- 0.0
- 0.0
- 0.0
x2705:
- 0.0
- 0.0
- 0.0
x2706:
- 0.0
- 0.0
- 0.0
x2707:
- 0.0
- 0.0
- 0.0
x2708:
- 0.0
- 0.0
- 0.0
x2709:
- 0.0
- 0.0
- 0.0
x271:
- 0.0
- 0.0
- 0.0
x2710:
- 0.0
- 0.0
- 0.0
x2711:
- 0.0
- 0.0
- 0.0
x2712:
- 0.0
- 0.0
- 0.0
x2713:
- 0.0
- 0.0
- 0.0
x2714:
- 0.0
- 0.0
- 0.0
x2715:
- 0.0
- 0.0
- 0.0
x2716:
- 0.0
- 0.0
- 0.0
x2717:
- 0.0
- 0.0
- 0.0
x2718:
- 0.0
- 0.0
- 0.0
x2719:
- 0.0
- 0.0
- 0.0
x272:
- 0.0
- 0.0
- 0.0
x2720:
- 0.0
- 0.0
- 0.0
x2721:
- 0.0
- 0.0
- 0.0
x2722:
- 0.0
- 0.0
- 0.0
x2723:
- 0.0
- 0.0
- 0.0
x2724:
- 0.0
- 0.0
- 0.0
x2725:
- 0.0
- 0.0
- 0.0
x2726:
- 0.0
- 0.0
- 0.0
x2727:
- 0.0
- 0.0
- 0.0
x2728:
- 0.0
- 0.0
- 0.0
x2729:
- 0.0
- 0.0
- 0.0
x273:
- 0.0
- 0.0
- 0.0
x2730:
- 0.0
- 0.0
- 0.0
x2731:
- 0.0
- 0.0
- 0.0
x2732:
- 0.0
- 0.0
- 0.0
x2733:
- 0.0
- 0.0
- 0.0
x2734:
- 0.0
- 0.0
- 0.0
x2735:
- 0.0
- 0.0
- 0.0
x2736:
- 0.0
- 0.0
- 0.0
x2737:
- 0.0
- 0.0
- 0.0
x2738:
- 0.0
- 0.0
- 0.0
x2739:
- 0.0
- 0.0
- 0.0
x274:
- 0.0
- 0.0
- 0.0
x2740:
- 0.0
- 0.0
- 0.0
x2741:
- 0.0
- 0.0
- 0.0
x2742:
- 0.0
- 0.0
- 0.0
x2743:
- 0.0
- 0.0
- 0.0
x2744:
- 0.0
- 0.0
- 0.0
x2745:
- 0.0
- 0.0
- 0.0
x2746:
- 0.0
- 0.0
- 0.0
x2747:
- 0.0
- 0.0
- 0.0
x2748:
- 0.0
- 0.0
- 0.0
x2749:
- 0.0
- 0.0
- 0.0
x275:
- 0.0
- 0.0
- 0.0
x2750:
- 0.0
- 0.0
- 0.0
x2751:
- 0.0
- 0.0
- 0.0
x2752:
- 0.0
- 0.0
- 0.0
x2753:
- 0.0
- 0.0
- 0.0
x2754:
- 0.0
- 0.0
- 0.0
x2755:
- 0.0
- 0.0
- 0.0
x2756:
- 0.0
- 0.0
- 0.0
x2757:
- 0.0
- 0.0
- 0.0
x2758:
- 0.0
- 0.0
- 0.0
x2759:
- 0.0
- 0.0
- 0.0
x276:
- 0.0
- 0.0
- 0.0
x2760:
- 0.0
- 0.0
- 0.0
x2761:
- 0.0
- 0.0
- 0.0
x2762:
- 0.0
- 0.0
- 0.0
x2763:
- 0.0
- 0.0
- 0.0
x2764:
- 0.0
- 0.0
- 0.0
x2765:
- 0.0
- 0.0
- 0.0
x2766:
- 0.0
- 0.0
- 0.0
x2767:
- 0.0
- 0.0
- 0.0
# [elided] Entries x2768 through x5080 omitted for brevity: each key maps to a
# three-element list of 0.0, except the four non-zero entries reproduced
# verbatim below (in their original order).
x3215:
- 1.0
- 0.0
- 0.0
x3513:
- 0.0
- 1.0
- 0.0
x45:
- 0.0
- 1.0
- 0.0
x4624:
- 0.0
- 0.0
- 1.0
x5081:
- 0.0
- 0.0
- 0.0
x5082:
- 0.0
- 0.0
- 0.0
x5083:
- 0.0
- 0.0
- 0.0
x5084:
- 0.0
- 0.0
- 0.0
x5085:
- 0.0
- 0.0
- 0.0
x5086:
- 0.0
- 0.0
- 0.0
x5087:
- 0.0
- 0.0
- 0.0
x5088:
- 0.0
- 0.0
- 0.0
x5089:
- 0.0
- 0.0
- 0.0
x509:
- 0.0
- 0.0
- 0.0
x5090:
- 0.0
- 0.0
- 0.0
x5091:
- 0.0
- 0.0
- 0.0
x5092:
- 0.0
- 0.0
- 0.0
x5093:
- 0.0
- 0.0
- 0.0
x5094:
- 0.0
- 0.0
- 0.0
x5095:
- 0.0
- 0.0
- 0.0
x5096:
- 0.0
- 0.0
- 0.0
x5097:
- 0.0
- 0.0
- 0.0
x5098:
- 0.0
- 0.0
- 0.0
x5099:
- 0.0
- 0.0
- 0.0
x51:
- 0.0
- 0.0
- 0.0
x510:
- 0.0
- 0.0
- 0.0
x5100:
- 0.0
- 0.0
- 0.0
x5101:
- 0.0
- 0.0
- 0.0
x5102:
- 0.0
- 0.0
- 0.0
x5103:
- 0.0
- 0.0
- 0.0
x5104:
- 0.0
- 0.0
- 0.0
x5105:
- 0.0
- 0.0
- 0.0
x5106:
- 0.0
- 0.0
- 0.0
x5107:
- 0.0
- 0.0
- 0.0
x5108:
- 0.0
- 0.0
- 0.0
x5109:
- 0.0
- 0.0
- 0.0
x511:
- 0.0
- 0.0
- 0.0
x5110:
- 0.0
- 0.0
- 0.0
x5111:
- 0.0
- 0.0
- 0.0
x5112:
- 0.0
- 0.0
- 0.0
x5113:
- 0.0
- 0.0
- 0.0
x5114:
- 0.0
- 0.0
- 0.0
x5115:
- 0.0
- 0.0
- 0.0
x5116:
- 0.0
- 0.0
- 0.0
x5117:
- 0.0
- 0.0
- 0.0
x5118:
- 0.0
- 0.0
- 0.0
x5119:
- 0.0
- 0.0
- 0.0
x512:
- 0.0
- 0.0
- 0.0
x5120:
- 0.0
- 0.0
- 0.0
x5121:
- 0.0
- 0.0
- 0.0
x5122:
- 0.0
- 0.0
- 0.0
x5123:
- 0.0
- 0.0
- 0.0
x5124:
- 0.0
- 0.0
- 0.0
x5125:
- 0.0
- 0.0
- 0.0
x5126:
- 0.0
- 0.0
- 0.0
x5127:
- 0.0
- 0.0
- 0.0
x5128:
- 0.0
- 0.0
- 0.0
x5129:
- 0.0
- 0.0
- 0.0
x513:
- 0.0
- 0.0
- 0.0
x5130:
- 0.0
- 0.0
- 0.0
x5131:
- 0.0
- 0.0
- 0.0
x5132:
- 0.0
- 0.0
- 0.0
x5133:
- 0.0
- 0.0
- 0.0
x5134:
- 0.0
- 0.0
- 0.0
x5135:
- 0.0
- 0.0
- 0.0
x5136:
- 0.0
- 0.0
- 0.0
x5137:
- 0.0
- 0.0
- 0.0
x5138:
- 0.0
- 0.0
- 0.0
x5139:
- 0.0
- 0.0
- 0.0
x514:
- 0.0
- 0.0
- 0.0
x5140:
- 0.0
- 0.0
- 0.0
x5141:
- 0.0
- 0.0
- 0.0
x5142:
- 0.0
- 0.0
- 0.0
x5143:
- 0.0
- 0.0
- 0.0
x5144:
- 0.0
- 0.0
- 0.0
x5145:
- 0.0
- 0.0
- 0.0
x5146:
- 0.0
- 0.0
- 0.0
x5147:
- 0.0
- 0.0
- 0.0
x5148:
- 0.0
- 0.0
- 0.0
x5149:
- 0.0
- 0.0
- 0.0
x515:
- 0.0
- 0.0
- 0.0
x5150:
- 0.0
- 0.0
- 0.0
x5151:
- 0.0
- 0.0
- 0.0
x5152:
- 0.0
- 0.0
- 0.0
x5153:
- 0.0
- 0.0
- 0.0
x5154:
- 0.0
- 0.0
- 0.0
x5155:
- 0.0
- 0.0
- 0.0
x5156:
- 0.0
- 0.0
- 0.0
x5157:
- 0.0
- 0.0
- 0.0
x5158:
- 0.0
- 0.0
- 0.0
x5159:
- 0.0
- 0.0
- 0.0
x516:
- 0.0
- 0.0
- 0.0
x5160:
- 0.0
- 0.0
- 0.0
x5161:
- 0.0
- 0.0
- 0.0
x5162:
- 0.0
- 0.0
- 0.0
x5163:
- 0.0
- 0.0
- 0.0
x5164:
- 0.0
- 0.0
- 0.0
x5165:
- 0.0
- 0.0
- 0.0
x5166:
- 0.0
- 0.0
- 0.0
x5167:
- 0.0
- 0.0
- 0.0
x5168:
- 0.0
- 0.0
- 0.0
x5169:
- 0.0
- 0.0
- 0.0
x517:
- 0.0
- 0.0
- 0.0
x5170:
- 0.0
- 0.0
- 0.0
x5171:
- 0.0
- 0.0
- 0.0
x5172:
- 0.0
- 0.0
- 0.0
x5173:
- 0.0
- 0.0
- 0.0
x5174:
- 0.0
- 0.0
- 0.0
x5175:
- 0.0
- 0.0
- 0.0
x5176:
- 0.0
- 0.0
- 0.0
x5177:
- 0.0
- 0.0
- 0.0
x5178:
- 0.0
- 0.0
- 0.0
x5179:
- 0.0
- 0.0
- 0.0
x518:
- 0.0
- 0.0
- 0.0
x5180:
- 0.0
- 0.0
- 0.0
x5181:
- 0.0
- 0.0
- 0.0
x5182:
- 0.0
- 0.0
- 0.0
x5183:
- 0.0
- 0.0
- 0.0
x5184:
- 0.0
- 0.0
- 0.0
x5185:
- 0.0
- 0.0
- 0.0
x5186:
- 0.0
- 0.0
- 0.0
x5187:
- 0.0
- 0.0
- 0.0
x5188:
- 0.0
- 0.0
- 0.0
x5189:
- 0.0
- 0.0
- 0.0
x519:
- 0.0
- 0.0
- 0.0
x5190:
- 0.0
- 0.0
- 0.0
x5191:
- 0.0
- 0.0
- 0.0
x5192:
- 0.0
- 0.0
- 0.0
x5193:
- 0.0
- 0.0
- 0.0
x5194:
- 0.0
- 0.0
- 0.0
x5195:
- 0.0
- 0.0
- 0.0
x5196:
- 0.0
- 0.0
- 0.0
x5197:
- 0.0
- 0.0
- 0.0
x5198:
- 0.0
- 0.0
- 0.0
x5199:
- 0.0
- 0.0
- 0.0
x52:
- 0.0
- 0.0
- 0.0
x520:
- 0.0
- 0.0
- 0.0
x5200:
- 0.0
- 0.0
- 0.0
x5201:
- 0.0
- 0.0
- 0.0
x5202:
- 0.0
- 0.0
- 0.0
x5203:
- 0.0
- 0.0
- 0.0
x5204:
- 0.0
- 0.0
- 0.0
x5205:
- 0.0
- 0.0
- 0.0
x5206:
- 0.0
- 0.0
- 0.0
x5207:
- 0.0
- 0.0
- 0.0
x5208:
- 0.0
- 0.0
- 0.0
x5209:
- 0.0
- 0.0
- 0.0
x521:
- 0.0
- 0.0
- 0.0
x5210:
- 0.0
- 0.0
- 0.0
x5211:
- 0.0
- 0.0
- 0.0
x5212:
- 0.0
- 0.0
- 0.0
x5213:
- 0.0
- 0.0
- 0.0
x5214:
- 0.0
- 0.0
- 0.0
x5215:
- 0.0
- 0.0
- 0.0
x5216:
- 0.0
- 0.0
- 0.0
x5217:
- 0.0
- 0.0
- 0.0
x5218:
- 0.0
- 0.0
- 0.0
x5219:
- 0.0
- 0.0
- 0.0
x522:
- 0.0
- 0.0
- 0.0
x5220:
- 0.0
- 0.0
- 0.0
x5221:
- 0.0
- 0.0
- 0.0
x5222:
- 0.0
- 0.0
- 0.0
x5223:
- 0.0
- 0.0
- 0.0
x5224:
- 0.0
- 0.0
- 0.0
x5225:
- 0.0
- 0.0
- 0.0
x5226:
- 0.0
- 0.0
- 0.0
x5227:
- 0.0
- 0.0
- 0.0
x5228:
- 0.0
- 0.0
- 0.0
x5229:
- 0.0
- 0.0
- 0.0
x523:
- 0.0
- 0.0
- 0.0
x5230:
- 0.0
- 0.0
- 0.0
x5231:
- 0.0
- 0.0
- 0.0
x5232:
- 0.0
- 0.0
- 0.0
x5233:
- 0.0
- 0.0
- 0.0
x5234:
- 0.0
- 0.0
- 0.0
x5235:
- 0.0
- 0.0
- 0.0
x5236:
- 0.0
- 0.0
- 0.0
x5237:
- 0.0
- 0.0
- 0.0
x5238:
- 0.0
- 0.0
- 0.0
x5239:
- 0.0
- 0.0
- 0.0
x524:
- 0.0
- 0.0
- 0.0
x5240:
- 0.0
- 0.0
- 0.0
x5241:
- 0.0
- 0.0
- 0.0
x5242:
- 0.0
- 0.0
- 0.0
x5243:
- 0.0
- 0.0
- 0.0
x5244:
- 0.0
- 0.0
- 0.0
x5245:
- 0.0
- 0.0
- 0.0
x5246:
- 0.0
- 0.0
- 0.0
x5247:
- 0.0
- 0.0
- 0.0
x5248:
- 0.0
- 0.0
- 0.0
x5249:
- 0.0
- 0.0
- 0.0
x525:
- 0.0
- 0.0
- 0.0
x5250:
- 0.0
- 0.0
- 0.0
x5251:
- 0.0
- 0.0
- 0.0
x5252:
- 0.0
- 0.0
- 0.0
x5253:
- 0.0
- 0.0
- 0.0
x5254:
- 0.0
- 0.0
- 0.0
x5255:
- 0.0
- 0.0
- 0.0
x5256:
- 0.0
- 0.0
- 0.0
x5257:
- 0.0
- 0.0
- 0.0
x5258:
- 0.0
- 0.0
- 0.0
x5259:
- 0.0
- 0.0
- 0.0
x526:
- 0.0
- 0.0
- 0.0
x5260:
- 0.0
- 0.0
- 0.0
x5261:
- 0.0
- 0.0
- 0.0
x5262:
- 0.0
- 0.0
- 0.0
x5263:
- 0.0
- 0.0
- 0.0
x5264:
- 0.0
- 0.0
- 0.0
x5265:
- 0.0
- 0.0
- 0.0
x5266:
- 0.0
- 0.0
- 0.0
x5267:
- 0.0
- 0.0
- 0.0
x5268:
- 0.0
- 0.0
- 0.0
x5269:
- 0.0
- 0.0
- 0.0
x527:
- 0.0
- 0.0
- 0.0
x5270:
- 0.0
- 0.0
- 0.0
x5271:
- 0.0
- 0.0
- 0.0
x5272:
- 0.0
- 0.0
- 0.0
x5273:
- 0.0
- 0.0
- 0.0
x5274:
- 0.0
- 0.0
- 0.0
x5275:
- 0.0
- 0.0
- 0.0
x5276:
- 0.0
- 0.0
- 0.0
x5277:
- 0.0
- 0.0
- 0.0
x5278:
- 0.0
- 0.0
- 0.0
x5279:
- 0.0
- 0.0
- 0.0
x528:
- 0.0
- 0.0
- 0.0
x5280:
- 0.0
- 0.0
- 0.0
x5281:
- 0.0
- 0.0
- 0.0
x5282:
- 0.0
- 0.0
- 0.0
x5283:
- 0.0
- 0.0
- 0.0
x5284:
- 0.0
- 0.0
- 0.0
x5285:
- 0.0
- 0.0
- 0.0
x5286:
- 0.0
- 0.0
- 0.0
x5287:
- 0.0
- 0.0
- 0.0
x5288:
- 0.0
- 0.0
- 0.0
x5289:
- 0.0
- 0.0
- 0.0
x529:
- 0.0
- 0.0
- 0.0
x5290:
- 0.0
- 0.0
- 0.0
x5291:
- 0.0
- 0.0
- 0.0
x5292:
- 0.0
- 0.0
- 0.0
x5293:
- 0.0
- 0.0
- 0.0
x5294:
- 0.0
- 0.0
- 0.0
x5295:
- 0.0
- 0.0
- 0.0
x5296:
- 0.0
- 0.0
- 0.0
x5297:
- 0.0
- 0.0
- 0.0
x5298:
- 0.0
- 0.0
- 0.0
x5299:
- 0.0
- 0.0
- 0.0
x53:
- 0.0
- 0.0
- 0.0
x530:
- 0.0
- 0.0
- 0.0
x5300:
- 0.0
- 0.0
- 0.0
x5301:
- 0.0
- 0.0
- 0.0
x5302:
- 0.0
- 0.0
- 0.0
x5303:
- 0.0
- 0.0
- 0.0
x5304:
- 0.0
- 0.0
- 0.0
x5305:
- 0.0
- 0.0
- 0.0
x5306:
- 0.0
- 0.0
- 0.0
x5307:
- 0.0
- 0.0
- 0.0
x5308:
- 0.0
- 0.0
- 0.0
x5309:
- 0.0
- 0.0
- 0.0
x531:
- 0.0
- 0.0
- 0.0
x5310:
- 0.0
- 0.0
- 0.0
x5311:
- 0.0
- 0.0
- 0.0
x5312:
- 0.0
- 0.0
- 0.0
x5313:
- 0.0
- 0.0
- 0.0
x5314:
- 0.0
- 0.0
- 0.0
x5315:
- 0.0
- 0.0
- 0.0
x5316:
- 0.0
- 0.0
- 0.0
x5317:
- 0.0
- 0.0
- 0.0
x5318:
- 0.0
- 0.0
- 0.0
x5319:
- 0.0
- 0.0
- 0.0
x532:
- 0.0
- 0.0
- 0.0
x5320:
- 0.0
- 0.0
- 0.0
x5321:
- 0.0
- 0.0
- 0.0
x5322:
- 0.0
- 0.0
- 0.0
x5323:
- 0.0
- 0.0
- 0.0
x5324:
- 0.0
- 0.0
- 0.0
x5325:
- 0.0
- 0.0
- 0.0
x5326:
- 0.0
- 0.0
- 0.0
x5327:
- 0.0
- 0.0
- 0.0
x5328:
- 0.0
- 0.0
- 0.0
x5329:
- 0.0
- 0.0
- 0.0
x533:
- 0.0
- 0.0
- 0.0
x5330:
- 0.0
- 0.0
- 0.0
x5331:
- 0.0
- 0.0
- 0.0
x5332:
- 0.0
- 0.0
- 0.0
x5333:
- 0.0
- 0.0
- 0.0
x5334:
- 0.0
- 0.0
- 0.0
x5335:
- 0.0
- 0.0
- 0.0
x5336:
- 0.0
- 0.0
- 0.0
x5337:
- 0.0
- 0.0
- 0.0
x5338:
- 0.0
- 0.0
- 0.0
x5339:
- 0.0
- 0.0
- 0.0
x534:
- 0.0
- 0.0
- 0.0
x5340:
- 0.0
- 0.0
- 0.0
x5341:
- 0.0
- 0.0
- 0.0
x5342:
- 0.0
- 0.0
- 0.0
x5343:
- 0.0
- 0.0
- 0.0
x5344:
- 0.0
- 0.0
- 0.0
x5345:
- 0.0
- 0.0
- 0.0
x5346:
- 0.0
- 0.0
- 0.0
x5347:
- 0.0
- 0.0
- 0.0
x5348:
- 0.0
- 0.0
- 0.0
x5349:
- 0.0
- 0.0
- 0.0
x535:
- 0.0
- 0.0
- 0.0
x5350:
- 0.0
- 0.0
- 0.0
x5351:
- 0.0
- 0.0
- 0.0
x5352:
- 0.0
- 0.0
- 0.0
x5353:
- 0.0
- 0.0
- 0.0
x5354:
- 0.0
- 0.0
- 0.0
x5355:
- 0.0
- 0.0
- 0.0
x5356:
- 0.0
- 0.0
- 0.0
x5357:
- 0.0
- 0.0
- 0.0
x5358:
- 0.0
- 0.0
- 0.0
x5359:
- 0.0
- 0.0
- 0.0
x536:
- 0.0
- 0.0
- 0.0
x5360:
- 0.0
- 0.0
- 0.0
x5361:
- 0.0
- 0.0
- 0.0
x5362:
- 0.0
- 0.0
- 0.0
x5363:
- 0.0
- 0.0
- 0.0
x5364:
- 0.0
- 0.0
- 0.0
x5365:
- 0.0
- 0.0
- 0.0
x5366:
- 0.0
- 0.0
- 0.0
x5367:
- 0.0
- 0.0
- 0.0
x5368:
- 0.0
- 0.0
- 0.0
x5369:
- 0.0
- 0.0
- 0.0
x537:
- 0.0
- 0.0
- 0.0
x5370:
- 0.0
- 0.0
- 0.0
x5371:
- 0.0
- 0.0
- 0.0
x5372:
- 0.0
- 0.0
- 0.0
x5373:
- 0.0
- 0.0
- 0.0
x5374:
- 0.0
- 0.0
- 0.0
x5375:
- 0.0
- 0.0
- 0.0
x5376:
- 0.0
- 0.0
- 0.0
x5377:
- 0.0
- 0.0
- 0.0
x5378:
- 0.0
- 0.0
- 0.0
x5379:
- 0.0
- 0.0
- 0.0
x538:
- 0.0
- 0.0
- 0.0
x5380:
- 0.0
- 0.0
- 0.0
x5381:
- 0.0
- 0.0
- 0.0
x5382:
- 0.0
- 0.0
- 0.0
x5383:
- 0.0
- 0.0
- 0.0
x5384:
- 0.0
- 0.0
- 0.0
x5385:
- 0.0
- 0.0
- 0.0
x5386:
- 0.0
- 0.0
- 0.0
x5387:
- 0.0
- 0.0
- 0.0
x5388:
- 0.0
- 0.0
- 0.0
x5389:
- 0.0
- 0.0
- 0.0
x539:
- 0.0
- 0.0
- 0.0
x5390:
- 0.0
- 0.0
- 0.0
x5391:
- 0.0
- 0.0
- 0.0
x5392:
- 0.0
- 0.0
- 0.0
x5393:
- 0.0
- 0.0
- 0.0
x5394:
- 0.0
- 0.0
- 0.0
x5395:
- 0.0
- 0.0
- 0.0
x5396:
- 0.0
- 0.0
- 0.0
x5397:
- 0.0
- 0.0
- 0.0
x5398:
- 0.0
- 0.0
- 0.0
x5399:
- 0.0
- 0.0
- 0.0
x54:
- 0.0
- 0.0
- 0.0
x540:
- 0.0
- 0.0
- 0.0
x5400:
- 0.0
- 0.0
- 0.0
x5401:
- 0.0
- 0.0
- 0.0
x5402:
- 0.0
- 0.0
- 0.0
x5403:
- 0.0
- 0.0
- 0.0
x5404:
- 0.0
- 0.0
- 0.0
x5405:
- 0.0
- 0.0
- 0.0
x5406:
- 0.0
- 0.0
- 0.0
x5407:
- 0.0
- 0.0
- 0.0
x5408:
- 0.0
- 0.0
- 0.0
x5409:
- 0.0
- 0.0
- 0.0
x541:
- 0.0
- 0.0
- 0.0
x5410:
- 0.0
- 0.0
- 0.0
x5411:
- 0.0
- 0.0
- 0.0
x5412:
- 0.0
- 0.0
- 0.0
x5413:
- 0.0
- 0.0
- 0.0
x5414:
- 0.0
- 0.0
- 0.0
x5415:
- 0.0
- 0.0
- 0.0
x5416:
- 0.0
- 0.0
- 0.0
x5417:
- 0.0
- 0.0
- 0.0
x5418:
- 0.0
- 0.0
- 0.0
x5419:
- 0.0
- 0.0
- 0.0
x542:
- 0.0
- 0.0
- 0.0
x5420:
- 0.0
- 0.0
- 0.0
x5421:
- 0.0
- 0.0
- 0.0
x5422:
- 0.0
- 0.0
- 0.0
x5423:
- 0.0
- 0.0
- 0.0
x5424:
- 0.0
- 0.0
- 0.0
x5425:
- 0.0
- 0.0
- 0.0
x5426:
- 0.0
- 0.0
- 0.0
x5427:
- 0.0
- 0.0
- 0.0
x5428:
- 0.0
- 0.0
- 0.0
x5429:
- 0.0
- 0.0
- 0.0
x543:
- 0.0
- 0.0
- 0.0
x5430:
- 0.0
- 0.0
- 0.0
x5431:
- 0.0
- 0.0
- 0.0
x5432:
- 0.0
- 0.0
- 0.0
x5433:
- 0.0
- 0.0
- 0.0
x5434:
- 0.0
- 0.0
- 0.0
x5435:
- 0.0
- 0.0
- 0.0
x5436:
- 0.0
- 0.0
- 0.0
x5437:
- 0.0
- 0.0
- 0.0
x5438:
- 0.0
- 0.0
- 0.0
x5439:
- 0.0
- 0.0
- 0.0
x544:
- 0.0
- 0.0
- 0.0
x5440:
- 0.0
- 0.0
- 0.0
x5441:
- 0.0
- 0.0
- 0.0
x5442:
- 0.0
- 0.0
- 0.0
x5443:
- 0.0
- 0.0
- 0.0
x5444:
- 0.0
- 0.0
- 0.0
x5445:
- 0.0
- 0.0
- 0.0
x5446:
- 0.0
- 0.0
- 0.0
x5447:
- 0.0
- 0.0
- 0.0
x5448:
- 0.0
- 0.0
- 0.0
x5449:
- 0.0
- 0.0
- 0.0
x545:
- 0.0
- 0.0
- 0.0
x5450:
- 0.0
- 0.0
- 0.0
x5451:
- 0.0
- 0.0
- 0.0
x5452:
- 0.0
- 0.0
- 0.0
x5453:
- 0.0
- 0.0
- 0.0
x5454:
- 0.0
- 0.0
- 0.0
x5455:
- 0.0
- 0.0
- 0.0
x5456:
- 0.0
- 0.0
- 0.0
x5457:
- 0.0
- 0.0
- 0.0
x5458:
- 0.0
- 0.0
- 0.0
x5459:
- 0.0
- 0.0
- 0.0
x546:
- 0.0
- 0.0
- 0.0
x5460:
- 0.0
- 0.0
- 0.0
x5461:
- 0.0
- 0.0
- 0.0
x5462:
- 0.0
- 0.0
- 0.0
x5463:
- 0.0
- 0.0
- 0.0
x5464:
- 0.0
- 0.0
- 0.0
x5465:
- 0.0
- 0.0
- 0.0
x5466:
- 0.0
- 0.0
- 0.0
x5467:
- 0.0
- 0.0
- 0.0
x5468:
- 0.0
- 0.0
- 0.0
x5469:
- 0.0
- 0.0
- 0.0
x547:
- 0.0
- 0.0
- 0.0
x5470:
- 0.0
- 0.0
- 0.0
x5471:
- 0.0
- 0.0
- 0.0
x5472:
- 0.0
- 0.0
- 0.0
x5473:
- 0.0
- 0.0
- 0.0
x5474:
- 0.0
- 0.0
- 0.0
x5475:
- 0.0
- 0.0
- 0.0
x5476:
- 0.0
- 0.0
- 0.0
x5477:
- 0.0
- 0.0
- 0.0
x5478:
- 0.0
- 0.0
- 0.0
x5479:
- 0.0
- 0.0
- 0.0
x548:
- 0.0
- 0.0
- 0.0
x5480:
- 0.0
- 0.0
- 0.0
x5481:
- 0.0
- 0.0
- 0.0
x5482:
- 0.0
- 0.0
- 0.0
x5483:
- 0.0
- 0.0
- 0.0
x5484:
- 0.0
- 0.0
- 0.0
x5485:
- 0.0
- 0.0
- 0.0
x5486:
- 0.0
- 0.0
- 0.0
x5487:
- 0.0
- 0.0
- 0.0
x5488:
- 0.0
- 0.0
- 0.0
x5489:
- 0.0
- 0.0
- 0.0
x549:
- 0.0
- 0.0
- 0.0
x5490:
- 0.0
- 0.0
- 0.0
x5491:
- 0.0
- 0.0
- 0.0
x5492:
- 0.0
- 0.0
- 0.0
x5493:
- 0.0
- 0.0
- 0.0
x5494:
- 0.0
- 0.0
- 0.0
x5495:
- 0.0
- 0.0
- 0.0
x5496:
- 0.0
- 0.0
- 0.0
x5497:
- 0.0
- 0.0
- 0.0
x5498:
- 0.0
- 0.0
- 0.0
x5499:
- 0.0
- 0.0
- 0.0
x55:
- 0.0
- 0.0
- 0.0
x550:
- 0.0
- 0.0
- 0.0
x5500:
- 0.0
- 0.0
- 0.0
x5501:
- 0.0
- 0.0
- 0.0
x5502:
- 0.0
- 0.0
- 0.0
x5503:
- 0.0
- 0.0
- 0.0
x5504:
- 0.0
- 0.0
- 0.0
x5505:
- 0.0
- 0.0
- 0.0
x5506:
- 0.0
- 0.0
- 0.0
x5507:
- 0.0
- 0.0
- 0.0
x5508:
- 0.0
- 0.0
- 0.0
x5509:
- 0.0
- 0.0
- 0.0
x551:
- 0.0
- 0.0
- 0.0
x5510:
- 0.0
- 0.0
- 0.0
x5511:
- 0.0
- 0.0
- 0.0
x5512:
- 0.0
- 0.0
- 0.0
x5513:
- 0.0
- 0.0
- 0.0
x5514:
- 0.0
- 0.0
- 0.0
x5515:
- 0.0
- 0.0
- 0.0
x5516:
- 0.0
- 0.0
- 0.0
x5517:
- 0.0
- 0.0
- 0.0
x5518:
- 0.0
- 0.0
- 0.0
x5519:
- 0.0
- 0.0
- 0.0
x552:
- 0.0
- 0.0
- 0.0
x5520:
- 0.0
- 0.0
- 0.0
x5521:
- 0.0
- 0.0
- 0.0
x5522:
- 0.0
- 0.0
- 0.0
x5523:
- 0.0
- 0.0
- 0.0
x5524:
- 0.0
- 0.0
- 0.0
x5525:
- 0.0
- 0.0
- 0.0
x5526:
- 0.0
- 0.0
- 0.0
x5527:
- 0.0
- 0.0
- 0.0
x5528:
- 0.0
- 0.0
- 0.0
x5529:
- 0.0
- 0.0
- 0.0
x553:
- 0.0
- 0.0
- 0.0
x5530:
- 0.0
- 0.0
- 0.0
x5531:
- 0.0
- 0.0
- 0.0
x5532:
- 0.0
- 0.0
- 0.0
x5533:
- 0.0
- 0.0
- 0.0
x5534:
- 0.0
- 0.0
- 0.0
x5535:
- 0.0
- 0.0
- 0.0
x5536:
- 0.0
- 0.0
- 0.0
x5537:
- 0.0
- 0.0
- 0.0
x5538:
- 0.0
- 0.0
- 0.0
x5539:
- 0.0
- 0.0
- 0.0
x554:
- 0.0
- 0.0
- 0.0
x5540:
- 0.0
- 0.0
- 0.0
x5541:
- 0.0
- 0.0
- 0.0
x5542:
- 0.0
- 0.0
- 0.0
x5543:
- 0.0
- 0.0
- 0.0
x5544:
- 0.0
- 0.0
- 0.0
x5545:
- 0.0
- 0.0
- 0.0
x5546:
- 0.0
- 0.0
- 0.0
x5547:
- 0.0
- 0.0
- 0.0
x5548:
- 0.0
- 0.0
- 0.0
x5549:
- 0.0
- 0.0
- 0.0
x555:
- 0.0
- 0.0
- 0.0
x5550:
- 0.0
- 0.0
- 0.0
x5551:
- 0.0
- 0.0
- 0.0
x5552:
- 0.0
- 0.0
- 0.0
x5553:
- 0.0
- 0.0
- 0.0
x5554:
- 0.0
- 0.0
- 0.0
x5555:
- 0.0
- 0.0
- 0.0
x5556:
- 0.0
- 0.0
- 0.0
x5557:
- 0.0
- 0.0
- 0.0
x5558:
- 0.0
- 0.0
- 0.0
x5559:
- 0.0
- 0.0
- 0.0
x556:
- 0.0
- 0.0
- 0.0
x5560:
- 0.0
- 0.0
- 0.0
x5561:
- 0.0
- 0.0
- 0.0
x5562:
- 0.0
- 0.0
- 0.0
x5563:
- 0.0
- 0.0
- 0.0
x5564:
- 0.0
- 0.0
- 0.0
x5565:
- 0.0
- 0.0
- 0.0
x5566:
- 0.0
- 0.0
- 0.0
x5567:
- 0.0
- 0.0
- 0.0
x5568:
- 0.0
- 0.0
- 0.0
x5569:
- 0.0
- 0.0
- 0.0
x557:
- 0.0
- 0.0
- 0.0
x5570:
- 0.0
- 0.0
- 0.0
x5571:
- 0.0
- 0.0
- 0.0
x5572:
- 0.0
- 0.0
- 0.0
x5573:
- 0.0
- 0.0
- 0.0
x5574:
- 0.0
- 0.0
- 0.0
x5575:
- 0.0
- 0.0
- 0.0
x5576:
- 0.0
- 0.0
- 0.0
x5577:
- 0.0
- 0.0
- 0.0
x5578:
- 0.0
- 0.0
- 0.0
x5579:
- 0.0
- 0.0
- 0.0
x558:
- 0.0
- 0.0
- 0.0
x5580:
- 0.0
- 0.0
- 0.0
x5581:
- 0.0
- 0.0
- 0.0
x5582:
- 0.0
- 0.0
- 0.0
x5583:
- 0.0
- 0.0
- 0.0
x5584:
- 0.0
- 0.0
- 0.0
x5585:
- 0.0
- 0.0
- 0.0
x5586:
- 0.0
- 0.0
- 0.0
x5587:
- 0.0
- 0.0
- 0.0
x5588:
- 0.0
- 0.0
- 0.0
x5589:
- 0.0
- 0.0
- 0.0
x559:
- 0.0
- 0.0
- 0.0
x5590:
- 0.0
- 0.0
- 0.0
x5591:
- 0.0
- 0.0
- 0.0
x5592:
- 0.0
- 0.0
- 0.0
x5593:
- 0.0
- 0.0
- 0.0
x5594:
- 0.0
- 0.0
- 0.0
x5595:
- 0.0
- 0.0
- 0.0
x5596:
- 0.0
- 0.0
- 0.0
x5597:
- 0.0
- 0.0
- 0.0
x5598:
- 0.0
- 0.0
- 0.0
x5599:
- 0.0
- 0.0
- 0.0
x56:
- 0.0
- 0.0
- 0.0
x560:
- 0.0
- 0.0
- 0.0
x5600:
- 0.0
- 0.0
- 0.0
x5601:
- 0.0
- 0.0
- 0.0
x5602:
- 0.0
- 0.0
- 0.0
x5603:
- 0.0
- 0.0
- 0.0
x5604:
- 0.0
- 0.0
- 0.0
x5605:
- 0.0
- 0.0
- 0.0
x5606:
- 0.0
- 0.0
- 0.0
x5607:
- 0.0
- 0.0
- 0.0
x5608:
- 0.0
- 0.0
- 0.0
x5609:
- 0.0
- 0.0
- 0.0
x561:
- 0.0
- 0.0
- 0.0
x5610:
- 0.0
- 0.0
- 0.0
x5611:
- 0.0
- 0.0
- 0.0
x5612:
- 0.0
- 0.0
- 0.0
x5613:
- 0.0
- 0.0
- 0.0
x5614:
- 0.0
- 0.0
- 0.0
x5615:
- 0.0
- 0.0
- 0.0
x5616:
- 0.0
- 0.0
- 0.0
x5617:
- 0.0
- 0.0
- 0.0
x5618:
- 0.0
- 0.0
- 0.0
x5619:
- 0.0
- 0.0
- 0.0
x562:
- 0.0
- 0.0
- 0.0
x5620:
- 0.0
- 0.0
- 0.0
x5621:
- 0.0
- 0.0
- 0.0
x5622:
- 0.0
- 0.0
- 0.0
x5623:
- 0.0
- 0.0
- 0.0
x5624:
- 0.0
- 0.0
- 0.0
x5625:
- 0.0
- 0.0
- 0.0
x5626:
- 0.0
- 0.0
- 0.0
x5627:
- 0.0
- 0.0
- 0.0
x5628:
- 0.0
- 0.0
- 0.0
x5629:
- 0.0
- 0.0
- 0.0
x563:
- 0.0
- 0.0
- 0.0
x5630:
- 0.0
- 0.0
- 0.0
x5631:
- 0.0
- 0.0
- 0.0
x5632:
- 0.0
- 0.0
- 0.0
x5633:
- 0.0
- 0.0
- 0.0
x5634:
- 0.0
- 0.0
- 0.0
x5635:
- 0.0
- 0.0
- 0.0
x5636:
- 0.0
- 0.0
- 0.0
x5637:
- 0.0
- 0.0
- 0.0
x5638:
- 0.0
- 0.0
- 0.0
x5639:
- 0.0
- 0.0
- 0.0
x564:
- 0.0
- 0.0
- 0.0
x5640:
- 0.0
- 0.0
- 0.0
x5641:
- 0.0
- 0.0
- 0.0
x5642:
- 0.0
- 0.0
- 0.0
x5643:
- 0.0
- 0.0
- 0.0
x5644:
- 0.0
- 0.0
- 0.0
x5645:
- 0.0
- 0.0
- 0.0
x5646:
- 0.0
- 0.0
- 0.0
x5647:
- 0.0
- 0.0
- 0.0
x5648:
- 0.0
- 0.0
- 0.0
x5649:
- 0.0
- 0.0
- 0.0
x565:
- 0.0
- 0.0
- 0.0
x5650:
- 0.0
- 0.0
- 0.0
x5651:
- 0.0
- 0.0
- 0.0
x5652:
- 0.0
- 0.0
- 0.0
x5653:
- 0.0
- 0.0
- 0.0
x5654:
- 0.0
- 0.0
- 0.0
x5655:
- 0.0
- 0.0
- 0.0
x5656:
- 0.0
- 0.0
- 0.0
x5657:
- 0.0
- 0.0
- 0.0
x5658:
- 0.0
- 0.0
- 0.0
x5659:
- 0.0
- 0.0
- 0.0
x566:
- 0.0
- 0.0
- 0.0
x5660:
- 0.0
- 0.0
- 0.0
x5661:
- 0.0
- 0.0
- 0.0
x5662:
- 0.0
- 0.0
- 0.0
x5663:
- 0.0
- 0.0
- 0.0
x5664:
- 0.0
- 0.0
- 0.0
x5665:
- 0.0
- 0.0
- 0.0
x5666:
- 0.0
- 0.0
- 0.0
x5667:
- 0.0
- 0.0
- 0.0
x5668:
- 0.0
- 0.0
- 0.0
x5669:
- 0.0
- 0.0
- 0.0
x567:
- 0.0
- 0.0
- 0.0
x5670:
- 0.0
- 0.0
- 0.0
x5671:
- 0.0
- 0.0
- 0.0
x5672:
- 0.0
- 0.0
- 0.0
x5673:
- 0.0
- 0.0
- 0.0
x5674:
- 0.0
- 0.0
- 0.0
x5675:
- 0.0
- 0.0
- 0.0
x5676:
- 0.0
- 0.0
- 0.0
x5677:
- 0.0
- 0.0
- 0.0
x5678:
- 0.0
- 0.0
- 0.0
x5679:
- 0.0
- 0.0
- 0.0
x568:
- 0.0
- 0.0
- 0.0
x5680:
- 0.0
- 0.0
- 0.0
x5681:
- 0.0
- 0.0
- 0.0
x5682:
- 0.0
- 0.0
- 0.0
x5683:
- 0.0
- 0.0
- 0.0
x5684:
- 0.0
- 0.0
- 0.0
x5685:
- 0.0
- 0.0
- 0.0
x5686:
- 0.0
- 0.0
- 0.0
x5687:
- 0.0
- 0.0
- 0.0
x5688:
- 0.0
- 0.0
- 0.0
x5689:
- 0.0
- 0.0
- 0.0
x569:
- 0.0
- 0.0
- 0.0
x5690:
- 0.0
- 0.0
- 0.0
x5691:
- 0.0
- 0.0
- 0.0
x5692:
- 0.0
- 0.0
- 0.0
x5693:
- 0.0
- 0.0
- 0.0
x5694:
- 0.0
- 0.0
- 0.0
x5695:
- 0.0
- 0.0
- 0.0
x5696:
- 0.0
- 0.0
- 0.0
x5697:
- 0.0
- 0.0
- 0.0
x5698:
- 0.0
- 0.0
- 0.0
x5699:
- 0.0
- 0.0
- 0.0
x57:
- 0.0
- 0.0
- 0.0
x570:
- 0.0
- 0.0
- 0.0
x5700:
- 0.0
- 0.0
- 0.0
x5701:
- 0.0
- 0.0
- 0.0
x5702:
- 0.0
- 0.0
- 0.0
x5703:
- 0.0
- 0.0
- 0.0
x5704:
- 0.0
- 0.0
- 0.0
x5705:
- 0.0
- 0.0
- 0.0
x5706:
- 0.0
- 0.0
- 0.0
x5707:
- 0.0
- 0.0
- 0.0
x5708:
- 0.0
- 0.0
- 0.0
x5709:
- 0.0
- 0.0
- 0.0
x571:
- 0.0
- 0.0
- 0.0
x5710:
- 0.0
- 0.0
- 0.0
x5711:
- 0.0
- 0.0
- 0.0
x5712:
- 0.0
- 0.0
- 0.0
x5713:
- 0.0
- 0.0
- 0.0
x5714:
- 0.0
- 0.0
- 0.0
x5715:
- 0.0
- 0.0
- 0.0
x5716:
- 0.0
- 0.0
- 0.0
x5717:
- 0.0
- 0.0
- 0.0
x5718:
- 0.0
- 0.0
- 0.0
x5719:
- 0.0
- 0.0
- 0.0
x572:
- 0.0
- 0.0
- 0.0
x5720:
- 0.0
- 0.0
- 0.0
x5721:
- 0.0
- 0.0
- 0.0
x5722:
- 0.0
- 0.0
- 0.0
x5723:
- 0.0
- 0.0
- 0.0
x5724:
- 0.0
- 0.0
- 0.0
x5725:
- 0.0
- 0.0
- 0.0
x5726:
- 0.0
- 0.0
- 0.0
x5727:
- 0.0
- 0.0
- 0.0
x5728:
- 0.0
- 0.0
- 0.0
x5729:
- 0.0
- 0.0
- 0.0
x573:
- 0.0
- 0.0
- 0.0
x5730:
- 0.0
- 0.0
- 0.0
x5731:
- 0.0
- 0.0
- 0.0
x5732:
- 0.0
- 0.0
- 0.0
x5733:
- 0.0
- 0.0
- 0.0
x5734:
- 0.0
- 0.0
- 0.0
x5735:
- 0.0
- 0.0
- 0.0
x5736:
- 0.0
- 0.0
- 0.0
x5737:
- 0.0
- 0.0
- 0.0
x5738:
- 0.0
- 0.0
- 0.0
x5739:
- 0.0
- 0.0
- 0.0
x574:
- 0.0
- 0.0
- 0.0
x5740:
- 0.0
- 0.0
- 0.0
x5741:
- 0.0
- 0.0
- 0.0
x5742:
- 0.0
- 0.0
- 0.0
x5743:
- 0.0
- 0.0
- 0.0
x5744:
- 0.0
- 0.0
- 0.0
x5745:
- 0.0
- 0.0
- 0.0
x5746:
- 0.0
- 0.0
- 0.0
x5747:
- 0.0
- 0.0
- 0.0
x5748:
- 0.0
- 0.0
- 0.0
x5749:
- 0.0
- 0.0
- 0.0
x575:
- 0.0
- 0.0
- 0.0
x5750:
- 0.0
- 0.0
- 0.0
x5751:
- 0.0
- 0.0
- 0.0
x5752:
- 0.0
- 0.0
- 0.0
x5753:
- 0.0
- 0.0
- 0.0
x5754:
- 0.0
- 0.0
- 0.0
x5755:
- 0.0
- 0.0
- 0.0
x5756:
- 0.0
- 0.0
- 0.0
x5757:
- 0.0
- 0.0
- 0.0
x5758:
- 0.0
- 0.0
- 0.0
x5759:
- 0.0
- 0.0
- 0.0
x576:
- 0.0
- 0.0
- 0.0
x5760:
- 0.0
- 0.0
- 0.0
x5761:
- 0.0
- 0.0
- 0.0
x5762:
- 0.0
- 0.0
- 0.0
x5763:
- 0.0
- 0.0
- 0.0
x5764:
- 0.0
- 0.0
- 0.0
x5765:
- 0.0
- 0.0
- 0.0
x5766:
- 0.0
- 0.0
- 0.0
x5767:
- 0.0
- 0.0
- 0.0
x5768:
- 0.0
- 0.0
- 0.0
x5769:
- 0.0
- 0.0
- 0.0
x577:
- 0.0
- 0.0
- 0.0
x5770:
- 0.0
- 0.0
- 0.0
x5771:
- 0.0
- 0.0
- 0.0
x5772:
- 0.0
- 0.0
- 0.0
x5773:
- 0.0
- 0.0
- 0.0
x5774:
- 0.0
- 0.0
- 0.0
x5775:
- 0.0
- 0.0
- 0.0
x5776:
- 0.0
- 0.0
- 0.0
x5777:
- 0.0
- 0.0
- 0.0
x5778:
- 0.0
- 0.0
- 0.0
x5779:
- 0.0
- 0.0
- 0.0
x578:
- 0.0
- 0.0
- 0.0
x5780:
- 0.0
- 0.0
- 0.0
x5781:
- 0.0
- 0.0
- 0.0
x5782:
- 0.0
- 0.0
- 0.0
x5783:
- 0.0
- 0.0
- 0.0
x5784:
- 0.0
- 0.0
- 0.0
x5785:
- 0.0
- 0.0
- 0.0
x5786:
- 0.0
- 0.0
- 0.0
x5787:
- 0.0
- 0.0
- 0.0
x5788:
- 0.0
- 0.0
- 0.0
x5789:
- 0.0
- 0.0
- 0.0
x579:
- 0.0
- 0.0
- 0.0
x5790:
- 0.0
- 0.0
- 0.0
x5791:
- 0.0
- 0.0
- 0.0
x5792:
- 0.0
- 0.0
- 0.0
x5793:
- 0.0
- 0.0
- 0.0
x5794:
- 0.0
- 0.0
- 0.0
x5795:
- 0.0
- 0.0
- 0.0
x5796:
- 0.0
- 0.0
- 0.0
x5797:
- 0.0
- 0.0
- 0.0
x5798:
- 0.0
- 0.0
- 0.0
x5799:
- 0.0
- 0.0
- 0.0
x58:
- 0.0
- 0.0
- 0.0
x580:
- 0.0
- 0.0
- 0.0
x5800:
- 0.0
- 0.0
- 0.0
x5801:
- 0.0
- 0.0
- 0.0
x5802:
- 0.0
- 0.0
- 0.0
x5803:
- 0.0
- 0.0
- 0.0
x5804:
- 0.0
- 0.0
- 0.0
x5805:
- 0.0
- 0.0
- 0.0
x5806:
- 0.0
- 0.0
- 0.0
x5807:
- 0.0
- 0.0
- 0.0
x5808:
- 0.0
- 0.0
- 0.0
x5809:
- 0.0
- 0.0
- 0.0
x581:
- 0.0
- 0.0
- 0.0
x5810:
- 0.0
- 0.0
- 0.0
x5811:
- 0.0
- 0.0
- 0.0
x5812:
- 0.0
- 0.0
- 0.0
x5813:
- 0.0
- 0.0
- 0.0
x5814:
- 0.0
- 0.0
- 0.0
x5815:
- 0.0
- 0.0
- 0.0
x5816:
- 0.0
- 0.0
- 0.0
x5817:
- 0.0
- 0.0
- 0.0
x5818:
- 0.0
- 0.0
- 0.0
x5819:
- 0.0
- 0.0
- 0.0
x582:
- 0.0
- 0.0
- 0.0
x5820:
- 0.0
- 0.0
- 0.0
x5821:
- 0.0
- 0.0
- 0.0
x5822:
- 0.0
- 0.0
- 0.0
x5823:
- 0.0
- 0.0
- 0.0
x5824:
- 0.0
- 0.0
- 0.0
x5825:
- 0.0
- 0.0
- 0.0
x5826:
- 0.0
- 0.0
- 0.0
x5827:
- 0.0
- 0.0
- 0.0
x5828:
- 0.0
- 0.0
- 0.0
x5829:
- 0.0
- 0.0
- 0.0
x583:
- 0.0
- 0.0
- 0.0
x5830:
- 0.0
- 0.0
- 0.0
x5831:
- 0.0
- 0.0
- 0.0
x5832:
- 0.0
- 0.0
- 0.0
x5833:
- 0.0
- 0.0
- 0.0
x5834:
- 0.0
- 0.0
- 0.0
x5835:
- 0.0
- 0.0
- 0.0
x5836:
- 0.0
- 0.0
- 0.0
x5837:
- 0.0
- 0.0
- 0.0
x5838:
- 0.0
- 0.0
- 0.0
x5839:
- 0.0
- 0.0
- 0.0
x584:
- 0.0
- 0.0
- 0.0
x5840:
- 0.0
- 0.0
- 0.0
x5841:
- 0.0
- 0.0
- 0.0
x5842:
- 0.0
- 0.0
- 0.0
x5843:
- 0.0
- 0.0
- 0.0
x5844:
- 0.0
- 0.0
- 0.0
x5845:
- 0.0
- 0.0
- 0.0
x5846:
- 0.0
- 0.0
- 0.0
x5847:
- 0.0
- 0.0
- 0.0
x5848:
- 0.0
- 0.0
- 0.0
x5849:
- 0.0
- 0.0
- 0.0
x585:
- 0.0
- 0.0
- 0.0
x5850:
- 0.0
- 0.0
- 0.0
x5851:
- 0.0
- 0.0
- 0.0
x5852:
- 0.0
- 0.0
- 0.0
x5853:
- 0.0
- 0.0
- 0.0
x5854:
- 0.0
- 0.0
- 0.0
x5855:
- 0.0
- 0.0
- 0.0
x5856:
- 0.0
- 0.0
- 0.0
x5857:
- 0.0
- 0.0
- 0.0
x5858:
- 0.0
- 0.0
- 0.0
x5859:
- 0.0
- 0.0
- 0.0
x586:
- 0.0
- 0.0
- 0.0
x5860:
- 0.0
- 0.0
- 0.0
x5861:
- 0.0
- 0.0
- 0.0
x5862:
- 0.0
- 0.0
- 0.0
x5863:
- 0.0
- 0.0
- 0.0
x5864:
- 0.0
- 0.0
- 0.0
x5865:
- 0.0
- 0.0
- 0.0
x5866:
- 0.0
- 0.0
- 0.0
x5867:
- 0.0
- 0.0
- 0.0
x5868:
- 0.0
- 0.0
- 0.0
x5869:
- 0.0
- 0.0
- 0.0
x587:
- 0.0
- 0.0
- 0.0
x5870:
- 0.0
- 0.0
- 0.0
x5871:
- 0.0
- 0.0
- 0.0
x5872:
- 0.0
- 0.0
- 0.0
x5873:
- 0.0
- 0.0
- 0.0
x5874:
- 0.0
- 0.0
- 0.0
x5875:
- 0.0
- 0.0
- 0.0
x5876:
- 0.0
- 0.0
- 0.0
x5877:
- 0.0
- 0.0
- 0.0
x5878:
- 0.0
- 0.0
- 0.0
x5879:
- 0.0
- 0.0
- 0.0
x588:
- 0.0
- 0.0
- 0.0
x5880:
- 0.0
- 0.0
- 0.0
x5881:
- 0.0
- 0.0
- 0.0
x5882:
- 0.0
- 0.0
- 0.0
x5883:
- 0.0
- 0.0
- 0.0
x5884:
- 0.0
- 0.0
- 0.0
x5885:
- 0.0
- 0.0
- 0.0
x5886:
- 0.0
- 0.0
- 0.0
x5887:
- 0.0
- 0.0
- 0.0
x5888:
- 0.0
- 0.0
- 0.0
x5889:
- 0.0
- 0.0
- 0.0
x589:
- 0.0
- 0.0
- 0.0
x5890:
- 0.0
- 0.0
- 0.0
x5891:
- 0.0
- 0.0
- 0.0
x5892:
- 0.0
- 0.0
- 0.0
x5893:
- 0.0
- 0.0
- 0.0
x5894:
- 0.0
- 0.0
- 0.0
x5895:
- 0.0
- 0.0
- 0.0
x5896:
- 0.0
- 0.0
- 0.0
x5897:
- 0.0
- 0.0
- 0.0
x5898:
- 0.0
- 0.0
- 0.0
x5899:
- 0.0
- 0.0
- 0.0
x59:
- 0.0
- 0.0
- 0.0
x590:
- 0.0
- 0.0
- 0.0
x5900:
- 0.0
- 0.0
- 0.0
x5901:
- 0.0
- 0.0
- 0.0
x5902:
- 0.0
- 0.0
- 0.0
x5903:
- 0.0
- 0.0
- 0.0
x5904:
- 0.0
- 0.0
- 0.0
x5905:
- 0.0
- 0.0
- 0.0
x5906:
- 0.0
- 0.0
- 0.0
x5907:
- 0.0
- 0.0
- 0.0
x5908:
- 0.0
- 0.0
- 0.0
x5909:
- 0.0
- 0.0
- 0.0
x591:
- 0.0
- 0.0
- 0.0
x5910:
- 0.0
- 0.0
- 0.0
x5911:
- 0.0
- 0.0
- 0.0
x5912:
- 0.0
- 0.0
- 0.0
x5913:
- 0.0
- 0.0
- 0.0
x5914:
- 0.0
- 0.0
- 0.0
x5915:
- 0.0
- 0.0
- 0.0
x5916:
- 0.0
- 0.0
- 0.0
x5917:
- 0.0
- 0.0
- 0.0
x5918:
- 0.0
- 0.0
- 0.0
x5919:
- 0.0
- 0.0
- 0.0
x592:
- 0.0
- 0.0
- 0.0
x5920:
- 0.0
- 0.0
- 0.0
x5921:
- 0.0
- 0.0
- 0.0
x5922:
- 0.0
- 0.0
- 0.0
x5923:
- 0.0
- 0.0
- 0.0
x5924:
- 0.0
- 0.0
- 0.0
x5925:
- 0.0
- 0.0
- 0.0
x5926:
- 0.0
- 0.0
- 0.0
x5927:
- 0.0
- 0.0
- 0.0
x5928:
- 0.0
- 0.0
- 0.0
x5929:
- 0.0
- 0.0
- 0.0
x593:
- 0.0
- 0.0
- 0.0
x5930:
- 0.0
- 0.0
- 0.0
x5931:
- 0.0
- 0.0
- 0.0
x5932:
- 0.0
- 0.0
- 0.0
x5933:
- 0.0
- 0.0
- 0.0
x5934:
- 0.0
- 0.0
- 0.0
x5935:
- 0.0
- 0.0
- 0.0
x5936:
- 0.0
- 0.0
- 0.0
x5937:
- 0.0
- 0.0
- 0.0
x5938:
- 0.0
- 0.0
- 0.0
x5939:
- 0.0
- 0.0
- 0.0
x594:
- 0.0
- 0.0
- 0.0
x5940:
- 0.0
- 0.0
- 0.0
x5941:
- 0.0
- 0.0
- 0.0
x5942:
- 0.0
- 0.0
- 0.0
x5943:
- 0.0
- 0.0
- 0.0
x5944:
- 0.0
- 0.0
- 0.0
x5945:
- 0.0
- 0.0
- 0.0
x5946:
- 0.0
- 0.0
- 0.0
x5947:
- 0.0
- 0.0
- 0.0
x5948:
- 0.0
- 0.0
- 0.0
x5949:
- 0.0
- 0.0
- 0.0
x595:
- 0.0
- 0.0
- 0.0
x5950:
- 0.0
- 0.0
- 0.0
x5951:
- 0.0
- 0.0
- 0.0
x5952:
- 0.0
- 0.0
- 0.0
x5953:
- 0.0
- 0.0
- 0.0
x5954:
- 0.0
- 0.0
- 0.0
x5955:
- 0.0
- 0.0
- 0.0
x5956:
- 0.0
- 0.0
- 0.0
x5957:
- 0.0
- 0.0
- 0.0
x5958:
- 0.0
- 0.0
- 0.0
x5959:
- 0.0
- 0.0
- 0.0
x596:
- 0.0
- 0.0
- 0.0
x5960:
- 0.0
- 0.0
- 0.0
x5961:
- 0.0
- 0.0
- 0.0
x5962:
- 0.0
- 0.0
- 0.0
x5963:
- 0.0
- 0.0
- 0.0
x5964:
- 0.0
- 0.0
- 0.0
x5965:
- 0.0
- 0.0
- 0.0
x5966:
- 0.0
- 0.0
- 0.0
x5967:
- 0.0
- 0.0
- 0.0
x5968:
- 0.0
- 0.0
- 0.0
x5969:
- 0.0
- 0.0
- 0.0
x597:
- 0.0
- 0.0
- 0.0
x5970:
- 0.0
- 0.0
- 0.0
x5971:
- 0.0
- 0.0
- 0.0
x5972:
- 0.0
- 0.0
- 0.0
x5973:
- 0.0
- 0.0
- 0.0
x5974:
- 0.0
- 0.0
- 0.0
x5975:
- 0.0
- 0.0
- 0.0
x5976:
- 0.0
- 0.0
- 0.0
x5977:
- 0.0
- 0.0
- 0.0
x5978:
- 0.0
- 0.0
- 0.0
x5979:
- 0.0
- 0.0
- 0.0
x598:
- 0.0
- 0.0
- 0.0
x5980:
- 0.0
- 0.0
- 0.0
x5981:
- 0.0
- 0.0
- 0.0
x5982:
- 0.0
- 0.0
- 0.0
x5983:
- 0.0
- 0.0
- 0.0
x5984:
- 0.0
- 0.0
- 0.0
x5985:
- 0.0
- 0.0
- 0.0
x5986:
- 0.0
- 0.0
- 0.0
x5987:
- 0.0
- 0.0
- 0.0
x5988:
- 0.0
- 0.0
- 0.0
x5989:
- 0.0
- 0.0
- 0.0
x599:
- 0.0
- 0.0
- 0.0
x5990:
- 0.0
- 0.0
- 0.0
x5991:
- 0.0
- 0.0
- 0.0
x5992:
- 0.0
- 0.0
- 0.0
x5993:
- 0.0
- 0.0
- 0.0
x5994:
- 0.0
- 0.0
- 0.0
x5995:
- 0.0
- 0.0
- 0.0
x5996:
- 0.0
- 0.0
- 0.0
x5997:
- 0.0
- 0.0
- 0.0
x5998:
- 0.0
- 0.0
- 0.0
x5999:
- 0.0
- 0.0
- 0.0
x6:
- 0.0
- 0.0
- 0.0
x60:
- 0.0
- 0.0
- 0.0
x600:
- 0.0
- 0.0
- 0.0
x6000:
- 0.0
- 0.0
- 0.0
x6001:
- 0.0
- 0.0
- 0.0
x6002:
- 0.0
- 0.0
- 0.0
x6003:
- 0.0
- 0.0
- 0.0
x6004:
- 0.0
- 0.0
- 0.0
x6005:
- 0.0
- 0.0
- 0.0
x6006:
- 0.0
- 0.0
- 0.0
x6007:
- 0.0
- 0.0
- 0.0
x6008:
- 0.0
- 0.0
- 0.0
x6009:
- 0.0
- 0.0
- 0.0
x601:
- 0.0
- 0.0
- 0.0
x6010:
- 0.0
- 0.0
- 0.0
x6011:
- 0.0
- 0.0
- 0.0
x6012:
- 0.0
- 0.0
- 0.0
x6013:
- 0.0
- 0.0
- 0.0
x6014:
- 0.0
- 0.0
- 0.0
x6015:
- 0.0
- 0.0
- 0.0
x6016:
- 0.0
- 0.0
- 0.0
x6017:
- 0.0
- 0.0
- 0.0
x6018:
- 0.0
- 0.0
- 0.0
x6019:
- 0.0
- 0.0
- 0.0
x602:
- 0.0
- 0.0
- 0.0
x6020:
- 0.0
- 0.0
- 0.0
x6021:
- 0.0
- 0.0
- 0.0
x6022:
- 0.0
- 0.0
- 0.0
x6023:
- 0.0
- 0.0
- 0.0
x6024:
- 0.0
- 0.0
- 0.0
x6025:
- 0.0
- 0.0
- 0.0
x6026:
- 0.0
- 0.0
- 0.0
x6027:
- 0.0
- 0.0
- 0.0
x6028:
- 0.0
- 0.0
- 0.0
x6029:
- 0.0
- 0.0
- 0.0
x603:
- 0.0
- 0.0
- 0.0
x6030:
- 0.0
- 0.0
- 0.0
x6031:
- 0.0
- 0.0
- 0.0
x6032:
- 0.0
- 0.0
- 0.0
x6033:
- 0.0
- 0.0
- 0.0
x6034:
- 0.0
- 0.0
- 0.0
x6035:
- 0.0
- 0.0
- 0.0
x6036:
- 0.0
- 0.0
- 0.0
x6037:
- 0.0
- 0.0
- 0.0
x6038:
- 0.0
- 0.0
- 0.0
x6039:
- 0.0
- 0.0
- 0.0
x604:
- 0.0
- 0.0
- 0.0
x6040:
- 0.0
- 0.0
- 0.0
x6041:
- 0.0
- 0.0
- 0.0
x6042:
- 0.0
- 0.0
- 0.0
x6043:
- 0.0
- 0.0
- 0.0
x6044:
- 0.0
- 0.0
- 0.0
x6045:
- 0.0
- 0.0
- 0.0
x6046:
- 0.0
- 0.0
- 0.0
x6047:
- 0.0
- 0.0
- 0.0
x6048:
- 0.0
- 0.0
- 0.0
x6049:
- 0.0
- 0.0
- 0.0
x605:
- 0.0
- 0.0
- 0.0
x6050:
- 0.0
- 0.0
- 0.0
x6051:
- 0.0
- 0.0
- 0.0
x6052:
- 0.0
- 0.0
- 0.0
x6053:
- 0.0
- 0.0
- 0.0
x6054:
- 0.0
- 0.0
- 0.0
x6055:
- 0.0
- 0.0
- 0.0
x6056:
- 0.0
- 0.0
- 0.0
x6057:
- 0.0
- 0.0
- 0.0
x6058:
- 0.0
- 0.0
- 0.0
x6059:
- 0.0
- 0.0
- 0.0
x606:
- 0.0
- 0.0
- 0.0
x6060:
- 0.0
- 0.0
- 0.0
x6061:
- 0.0
- 0.0
- 0.0
x6062:
- 0.0
- 0.0
- 0.0
x6063:
- 0.0
- 0.0
- 0.0
x6064:
- 0.0
- 0.0
- 0.0
x6065:
- 0.0
- 0.0
- 0.0
x6066:
- 0.0
- 0.0
- 0.0
x6067:
- 0.0
- 0.0
- 0.0
x6068:
- 0.0
- 0.0
- 0.0
x6069:
- 0.0
- 0.0
- 0.0
x607:
- 0.0
- 0.0
- 0.0
x6070:
- 0.0
- 0.0
- 0.0
x6071:
- 0.0
- 0.0
- 0.0
x6072:
- 0.0
- 0.0
- 0.0
x6073:
- 0.0
- 0.0
- 0.0
x6074:
- 0.0
- 0.0
- 0.0
x6075:
- 0.0
- 0.0
- 0.0
x6076:
- 0.0
- 0.0
- 0.0
x6077:
- 0.0
- 0.0
- 0.0
x6078:
- 0.0
- 0.0
- 0.0
x6079:
- 0.0
- 0.0
- 0.0
x608:
- 0.0
- 0.0
- 0.0
x6080:
- 0.0
- 0.0
- 0.0
x6081:
- 0.0
- 0.0
- 0.0
x6082:
- 0.0
- 0.0
- 0.0
x6083:
- 0.0
- 0.0
- 0.0
x6084:
- 0.0
- 0.0
- 0.0
x6085:
- 0.0
- 0.0
- 0.0
x6086:
- 0.0
- 0.0
- 0.0
x6087:
- 0.0
- 0.0
- 0.0
x6088:
- 0.0
- 0.0
- 0.0
x6089:
- 0.0
- 0.0
- 0.0
x609:
- 0.0
- 0.0
- 0.0
x6090:
- 0.0
- 0.0
- 0.0
x6091:
- 0.0
- 0.0
- 0.0
x6092:
- 0.0
- 0.0
- 0.0
x6093:
- 0.0
- 0.0
- 0.0
x6094:
- 0.0
- 0.0
- 0.0
x6095:
- 0.0
- 0.0
- 0.0
x6096:
- 0.0
- 0.0
- 0.0
x6097:
- 0.0
- 0.0
- 0.0
x6098:
- 0.0
- 0.0
- 0.0
x6099:
- 0.0
- 0.0
- 0.0
x61:
- 0.0
- 0.0
- 0.0
x610:
- 0.0
- 0.0
- 0.0
x6100:
- 0.0
- 0.0
- 0.0
x6101:
- 0.0
- 0.0
- 0.0
x6102:
- 0.0
- 0.0
- 0.0
x6103:
- 0.0
- 0.0
- 0.0
x6104:
- 0.0
- 0.0
- 0.0
x6105:
- 0.0
- 0.0
- 0.0
x6106:
- 0.0
- 0.0
- 0.0
x6107:
- 0.0
- 0.0
- 0.0
x6108:
- 0.0
- 0.0
- 0.0
x6109:
- 0.0
- 0.0
- 0.0
x611:
- 0.0
- 0.0
- 0.0
x6110:
- 0.0
- 0.0
- 0.0
x6111:
- 0.0
- 0.0
- 0.0
x6112:
- 0.0
- 0.0
- 0.0
x6113:
- 0.0
- 0.0
- 0.0
x6114:
- 0.0
- 0.0
- 0.0
x6115:
- 0.0
- 0.0
- 0.0
x6116:
- 0.0
- 0.0
- 0.0
x6117:
- 0.0
- 0.0
- 0.0
x6118:
- 0.0
- 0.0
- 0.0
x6119:
- 0.0
- 0.0
- 0.0
x612:
- 0.0
- 0.0
- 0.0
x6120:
- 0.0
- 0.0
- 0.0
x6121:
- 0.0
- 0.0
- 0.0
x6122:
- 0.0
- 0.0
- 0.0
x6123:
- 0.0
- 0.0
- 0.0
x6124:
- 0.0
- 0.0
- 0.0
x6125:
- 0.0
- 0.0
- 0.0
x6126:
- 0.0
- 0.0
- 0.0
x6127:
- 0.0
- 0.0
- 0.0
x6128:
- 0.0
- 0.0
- 0.0
x6129:
- 0.0
- 0.0
- 0.0
x613:
- 0.0
- 0.0
- 0.0
x6130:
- 0.0
- 0.0
- 0.0
x6131:
- 0.0
- 0.0
- 0.0
x6132:
- 0.0
- 0.0
- 0.0
x6133:
- 0.0
- 0.0
- 0.0
x6134:
- 0.0
- 0.0
- 0.0
x6135:
- 0.0
- 0.0
- 0.0
x6136:
- 0.0
- 0.0
- 0.0
x6137:
- 0.0
- 0.0
- 0.0
x6138:
- 0.0
- 0.0
- 0.0
x6139:
- 0.0
- 0.0
- 0.0
x614:
- 0.0
- 0.0
- 0.0
x6140:
- 0.0
- 0.0
- 0.0
x6141:
- 0.0
- 0.0
- 0.0
x6142:
- 0.0
- 0.0
- 0.0
x6143:
- 0.0
- 0.0
- 0.0
x6144:
- 0.0
- 0.0
- 0.0
x6145:
- 0.0
- 0.0
- 0.0
x6146:
- 0.0
- 0.0
- 0.0
x6147:
- 0.0
- 0.0
- 0.0
x6148:
- 0.0
- 0.0
- 0.0
x6149:
- 0.0
- 0.0
- 0.0
x615:
- 0.0
- 0.0
- 0.0
x6150:
- 0.0
- 0.0
- 0.0
x6151:
- 0.0
- 0.0
- 0.0
x6152:
- 0.0
- 0.0
- 0.0
x6153:
- 0.0
- 0.0
- 0.0
x6154:
- 0.0
- 0.0
- 0.0
x6155:
- 0.0
- 0.0
- 0.0
x6156:
- 0.0
- 0.0
- 0.0
x6157:
- 0.0
- 0.0
- 0.0
x6158:
- 0.0
- 0.0
- 0.0
x6159:
- 0.0
- 0.0
- 0.0
x616:
- 0.0
- 0.0
- 0.0
x6160:
- 0.0
- 0.0
- 0.0
x6161:
- 0.0
- 0.0
- 0.0
x6162:
- 0.0
- 0.0
- 0.0
x6163:
- 0.0
- 0.0
- 0.0
x6164:
- 0.0
- 0.0
- 0.0
x6165:
- 0.0
- 0.0
- 0.0
x6166:
- 0.0
- 0.0
- 0.0
x6167:
- 0.0
- 0.0
- 0.0
x6168:
- 0.0
- 0.0
- 0.0
x6169:
- 0.0
- 0.0
- 0.0
x617:
- 0.0
- 0.0
- 0.0
x6170:
- 0.0
- 0.0
- 0.0
x6171:
- 0.0
- 0.0
- 0.0
x6172:
- 0.0
- 0.0
- 0.0
x6173:
- 0.0
- 0.0
- 0.0
x6174:
- 0.0
- 0.0
- 0.0
x6175:
- 0.0
- 0.0
- 0.0
x6176:
- 0.0
- 0.0
- 0.0
x6177:
- 0.0
- 0.0
- 0.0
x6178:
- 0.0
- 0.0
- 0.0
x6179:
- 0.0
- 0.0
- 0.0
x618:
- 0.0
- 0.0
- 0.0
x6180:
- 0.0
- 0.0
- 0.0
x6181:
- 0.0
- 0.0
- 0.0
x6182:
- 0.0
- 0.0
- 0.0
x6183:
- 0.0
- 0.0
- 0.0
x6184:
- 0.0
- 0.0
- 0.0
x6185:
- 0.0
- 0.0
- 0.0
x6186:
- 0.0
- 0.0
- 0.0
x619:
- 0.0
- 0.0
- 0.0
x62:
- 0.0
- 0.0
- 0.0
x620:
- 0.0
- 0.0
- 0.0
x621:
- 0.0
- 0.0
- 0.0
x622:
- 0.0
- 0.0
- 0.0
x623:
- 0.0
- 0.0
- 0.0
x624:
- 0.0
- 0.0
- 0.0
x625:
- 0.0
- 0.0
- 0.0
x626:
- 0.0
- 0.0
- 0.0
x627:
- 0.0
- 0.0
- 0.0
x628:
- 0.0
- 0.0
- 0.0
x629:
- 0.0
- 0.0
- 0.0
x63:
- 0.0
- 0.0
- 0.0
x630:
- 0.0
- 0.0
- 0.0
x631:
- 0.0
- 0.0
- 0.0
x632:
- 0.0
- 0.0
- 0.0
x633:
- 0.0
- 0.0
- 0.0
x634:
- 0.0
- 0.0
- 0.0
x635:
- 0.0
- 0.0
- 0.0
x636:
- 0.0
- 0.0
- 0.0
x637:
- 0.0
- 0.0
- 0.0
x638:
- 0.0
- 0.0
- 0.0
x639:
- 0.0
- 0.0
- 0.0
x64:
- 0.0
- 0.0
- 0.0
x640:
- 0.0
- 0.0
- 0.0
x641:
- 0.0
- 0.0
- 0.0
x642:
- 0.0
- 0.0
- 0.0
x643:
- 0.0
- 0.0
- 0.0
x644:
- 0.0
- 0.0
- 0.0
x645:
- 0.0
- 0.0
- 0.0
x646:
- 0.0
- 0.0
- 0.0
x647:
- 0.0
- 0.0
- 0.0
x648:
- 0.0
- 0.0
- 0.0
x649:
- 0.0
- 0.0
- 0.0
x65:
- 0.0
- 0.0
- 0.0
x650:
- 0.0
- 0.0
- 0.0
x651:
- 0.0
- 0.0
- 0.0
x652:
- 0.0
- 0.0
- 0.0
x653:
- 0.0
- 0.0
- 0.0
x654:
- 0.0
- 0.0
- 0.0
x655:
- 0.0
- 0.0
- 0.0
x656:
- 0.0
- 0.0
- 0.0
x657:
- 0.0
- 0.0
- 0.0
x658:
- 0.0
- 0.0
- 0.0
x659:
- 0.0
- 0.0
- 0.0
x66:
- 0.0
- 0.0
- 0.0
x660:
- 0.0
- 0.0
- 0.0
x661:
- 0.0
- 0.0
- 0.0
x662:
- 0.0
- 0.0
- 0.0
x663:
- 0.0
- 0.0
- 0.0
x664:
- 0.0
- 0.0
- 0.0
x665:
- 0.0
- 0.0
- 0.0
x666:
- 0.0
- 0.0
- 0.0
x667:
- 0.0
- 0.0
- 0.0
x668:
- 0.0
- 0.0
- 0.0
x669:
- 0.0
- 0.0
- 0.0
x67:
- 0.0
- 0.0
- 0.0
x670:
- 0.0
- 0.0
- 0.0
x671:
- 0.0
- 0.0
- 0.0
x672:
- 0.0
- 0.0
- 0.0
x673:
- 0.0
- 0.0
- 0.0
x674:
- 0.0
- 0.0
- 0.0
x675:
- 0.0
- 0.0
- 0.0
x676:
- 0.0
- 0.0
- 0.0
x677:
- 0.0
- 0.0
- 0.0
x678:
- 0.0
- 0.0
- 0.0
x679:
- 0.0
- 0.0
- 0.0
x68:
- 0.0
- 0.0
- 0.0
x680:
- 0.0
- 0.0
- 0.0
x681:
- 0.0
- 0.0
- 0.0
x682:
- 0.0
- 0.0
- 0.0
x683:
- 0.0
- 0.0
- 0.0
x684:
- 0.0
- 0.0
- 0.0
x685:
- 0.0
- 0.0
- 0.0
x686:
- 0.0
- 0.0
- 0.0
x687:
- 0.0
- 0.0
- 0.0
x688:
- 0.0
- 0.0
- 0.0
x689:
- 0.0
- 0.0
- 0.0
x69:
- 0.0
- 0.0
- 0.0
x690:
- 0.0
- 0.0
- 0.0
x691:
- 0.0
- 0.0
- 0.0
x692:
- 0.0
- 0.0
- 0.0
x693:
- 0.0
- 0.0
- 0.0
x694:
- 0.0
- 0.0
- 0.0
x695:
- 0.0
- 0.0
- 0.0
x696:
- 0.0
- 0.0
- 0.0
x697:
- 0.0
- 0.0
- 0.0
x698:
- 0.0
- 0.0
- 0.0
x699:
- 0.0
- 0.0
- 0.0
x7:
- 0.0
- 0.0
- 0.0
x70:
- 0.0
- 0.0
- 0.0
x700:
- 0.0
- 0.0
- 0.0
x701:
- 0.0
- 0.0
- 0.0
x702:
- 0.0
- 0.0
- 0.0
x703:
- 0.0
- 0.0
- 0.0
x704:
- 0.0
- 0.0
- 0.0
x705:
- 0.0
- 0.0
- 0.0
x706:
- 0.0
- 0.0
- 0.0
x707:
- 0.0
- 0.0
- 0.0
x708:
- 0.0
- 0.0
- 0.0
x709:
- 0.0
- 0.0
- 0.0
x71:
- 0.0
- 0.0
- 0.0
x710:
- 0.0
- 0.0
- 0.0
x711:
- 0.0
- 0.0
- 0.0
x712:
- 0.0
- 0.0
- 0.0
x713:
- 0.0
- 0.0
- 0.0
x714:
- 0.0
- 0.0
- 0.0
x715:
- 0.0
- 0.0
- 0.0
x716:
- 0.0
- 0.0
- 0.0
x717:
- 0.0
- 0.0
- 0.0
x718:
- 0.0
- 0.0
- 0.0
x719:
- 0.0
- 0.0
- 0.0
x72:
- 0.0
- 0.0
- 0.0
x720:
- 0.0
- 0.0
- 0.0
x721:
- 0.0
- 0.0
- 0.0
x722:
- 0.0
- 0.0
- 0.0
x723:
- 0.0
- 0.0
- 0.0
x724:
- 0.0
- 0.0
- 0.0
x725:
- 0.0
- 0.0
- 0.0
x726:
- 0.0
- 0.0
- 0.0
x727:
- 0.0
- 0.0
- 0.0
x728:
- 0.0
- 0.0
- 0.0
x729:
- 0.0
- 0.0
- 0.0
x73:
- 0.0
- 0.0
- 0.0
x730:
- 0.0
- 0.0
- 0.0
x731:
- 0.0
- 0.0
- 0.0
x732:
- 0.0
- 0.0
- 0.0
x733:
- 0.0
- 0.0
- 0.0
x734:
- 0.0
- 0.0
- 0.0
x735:
- 0.0
- 0.0
- 0.0
x736:
- 0.0
- 0.0
- 0.0
x737:
- 0.0
- 0.0
- 0.0
x738:
- 0.0
- 0.0
- 0.0
x739:
- 0.0
- 0.0
- 0.0
x74:
- 0.0
- 0.0
- 0.0
x740:
- 0.0
- 0.0
- 0.0
x741:
- 0.0
- 0.0
- 0.0
x742:
- 0.0
- 0.0
- 0.0
x743:
- 0.0
- 0.0
- 0.0
x744:
- 0.0
- 0.0
- 0.0
x745:
- 0.0
- 0.0
- 0.0
x746:
- 0.0
- 0.0
- 0.0
x747:
- 0.0
- 0.0
- 0.0
x748:
- 0.0
- 0.0
- 0.0
x749:
- 0.0
- 0.0
- 0.0
x75:
- 0.0
- 0.0
- 0.0
x750:
- 0.0
- 0.0
- 0.0
x751:
- 0.0
- 0.0
- 0.0
x752:
- 0.0
- 0.0
- 0.0
x753:
- 0.0
- 0.0
- 0.0
x754:
- 0.0
- 0.0
- 0.0
x755:
- 0.0
- 0.0
- 0.0
x756:
- 0.0
- 0.0
- 0.0
x757:
- 0.0
- 0.0
- 0.0
x758:
- 0.0
- 0.0
- 0.0
x759:
- 0.0
- 0.0
- 0.0
x76:
- 0.0
- 0.0
- 0.0
x760:
- 0.0
- 0.0
- 0.0
x761:
- 0.0
- 0.0
- 0.0
x762:
- 0.0
- 0.0
- 0.0
x763:
- 0.0
- 0.0
- 0.0
x764:
- 0.0
- 0.0
- 0.0
x765:
- 0.0
- 0.0
- 0.0
x766:
- 0.0
- 0.0
- 0.0
x767:
- 0.0
- 0.0
- 0.0
x768:
- 0.0
- 0.0
- 0.0
x769:
- 0.0
- 0.0
- 0.0
x77:
- 0.0
- 0.0
- 0.0
x770:
- 0.0
- 0.0
- 0.0
x771:
- 0.0
- 0.0
- 0.0
x772:
- 0.0
- 0.0
- 0.0
x773:
- 0.0
- 0.0
- 0.0
x774:
- 0.0
- 0.0
- 0.0
x775:
- 0.0
- 0.0
- 0.0
x776:
- 0.0
- 0.0
- 0.0
x777:
- 0.0
- 0.0
- 0.0
x778:
- 0.0
- 0.0
- 0.0
x779:
- 0.0
- 0.0
- 0.0
x78:
- 0.0
- 0.0
- 0.0
x780:
- 0.0
- 0.0
- 0.0
x781:
- 0.0
- 0.0
- 0.0
x782:
- 0.0
- 0.0
- 0.0
x783:
- 0.0
- 0.0
- 0.0
x784:
- 0.0
- 0.0
- 0.0
x785:
- 0.0
- 0.0
- 0.0
x786:
- 0.0
- 0.0
- 0.0
x787:
- 0.0
- 0.0
- 0.0
x788:
- 0.0
- 0.0
- 0.0
x789:
- 0.0
- 0.0
- 0.0
x79:
- 0.0
- 0.0
- 0.0
x790:
- 0.0
- 0.0
- 0.0
x791:
- 0.0
- 0.0
- 0.0
x792:
- 0.0
- 0.0
- 0.0
x793:
- 0.0
- 0.0
- 0.0
x794:
- 0.0
- 0.0
- 0.0
x795:
- 0.0
- 0.0
- 0.0
x796:
- 0.0
- 0.0
- 0.0
x797:
- 0.0
- 0.0
- 0.0
x798:
- 0.0
- 0.0
- 0.0
x799:
- 0.0
- 0.0
- 0.0
x8:
- 0.0
- 0.0
- 0.0
x80:
- 0.0
- 0.0
- 0.0
x800:
- 0.0
- 0.0
- 0.0
x801:
- 0.0
- 0.0
- 0.0
x802:
- 0.0
- 0.0
- 0.0
x803:
- 0.0
- 0.0
- 0.0
x804:
- 0.0
- 0.0
- 0.0
x805:
- 0.0
- 0.0
- 0.0
x806:
- 0.0
- 0.0
- 0.0
x807:
- 0.0
- 0.0
- 0.0
x808:
- 0.0
- 0.0
- 0.0
x809:
- 0.0
- 0.0
- 0.0
x81:
- 0.0
- 0.0
- 0.0
x810:
- 0.0
- 0.0
- 0.0
x811:
- 0.0
- 0.0
- 0.0
x812:
- 0.0
- 0.0
- 0.0
x813:
- 0.0
- 0.0
- 0.0
x814:
- 0.0
- 0.0
- 0.0
x815:
- 0.0
- 0.0
- 0.0
x816:
- 0.0
- 0.0
- 0.0
x817:
- 0.0
- 0.0
- 0.0
x818:
- 0.0
- 0.0
- 0.0
x819:
- 0.0
- 0.0
- 0.0
x82:
- 0.0
- 0.0
- 0.0
x820:
- 0.0
- 0.0
- 0.0
x821:
- 0.0
- 0.0
- 0.0
x822:
- 0.0
- 0.0
- 0.0
x823:
- 0.0
- 0.0
- 0.0
x824:
- 0.0
- 0.0
- 0.0
x825:
- 0.0
- 0.0
- 0.0
x826:
- 0.0
- 0.0
- 0.0
x827:
- 0.0
- 0.0
- 0.0
x828:
- 0.0
- 0.0
- 0.0
x829:
- 0.0
- 0.0
- 0.0
x83:
- 0.0
- 0.0
- 0.0
x830:
- 0.0
- 0.0
- 0.0
x831:
- 0.0
- 0.0
- 0.0
x832:
- 0.0
- 0.0
- 0.0
x833:
- 0.0
- 0.0
- 0.0
x834:
- 0.0
- 0.0
- 0.0
x835:
- 0.0
- 0.0
- 0.0
x836:
- 0.0
- 0.0
- 0.0
x837:
- 0.0
- 0.0
- 0.0
x838:
- 0.0
- 0.0
- 0.0
x839:
- 0.0
- 0.0
- 0.0
x84:
- 0.0
- 0.0
- 0.0
x840:
- 0.0
- 0.0
- 0.0
x841:
- 0.0
- 0.0
- 0.0
x842:
- 0.0
- 0.0
- 0.0
x843:
- 0.0
- 0.0
- 0.0
x844:
- 0.0
- 0.0
- 0.0
x845:
- 0.0
- 0.0
- 0.0
x846:
- 0.0
- 0.0
- 0.0
x847:
- 0.0
- 0.0
- 0.0
x848:
- 0.0
- 0.0
- 0.0
x849:
- 0.0
- 0.0
- 0.0
x85:
- 0.0
- 0.0
- 0.0
x850:
- 0.0
- 0.0
- 0.0
x851:
- 0.0
- 0.0
- 0.0
x852:
- 0.0
- 0.0
- 0.0
x853:
- 0.0
- 0.0
- 0.0
x854:
- 0.0
- 0.0
- 0.0
x855:
- 0.0
- 0.0
- 0.0
x856:
- 0.0
- 0.0
- 0.0
x857:
- 0.0
- 0.0
- 0.0
x858:
- 0.0
- 0.0
- 0.0
x859:
- 0.0
- 0.0
- 0.0
x86:
- 0.0
- 0.0
- 0.0
x860:
- 0.0
- 0.0
- 0.0
x861:
- 0.0
- 0.0
- 0.0
x862:
- 0.0
- 0.0
- 0.0
x863:
- 0.0
- 0.0
- 0.0
x864:
- 0.0
- 0.0
- 0.0
x865:
- 0.0
- 0.0
- 0.0
x866:
- 0.0
- 0.0
- 0.0
x867:
- 0.0
- 0.0
- 0.0
x868:
- 0.0
- 0.0
- 0.0
x869:
- 0.0
- 0.0
- 0.0
x87:
- 0.0
- 0.0
- 0.0
x870:
- 0.0
- 0.0
- 0.0
x871:
- 0.0
- 0.0
- 0.0
x872:
- 0.0
- 0.0
- 0.0
x873:
- 0.0
- 0.0
- 0.0
x874:
- 0.0
- 0.0
- 0.0
x875:
- 0.0
- 0.0
- 0.0
x876:
- 0.0
- 0.0
- 0.0
x877:
- 0.0
- 0.0
- 0.0
x878:
- 0.0
- 0.0
- 0.0
x879:
- 0.0
- 0.0
- 0.0
x88:
- 0.0
- 0.0
- 0.0
x880:
- 0.0
- 0.0
- 0.0
x881:
- 0.0
- 0.0
- 0.0
x882:
- 0.0
- 0.0
- 0.0
x883:
- 0.0
- 0.0
- 0.0
x884:
- 0.0
- 0.0
- 0.0
x885:
- 0.0
- 0.0
- 0.0
x886:
- 0.0
- 0.0
- 0.0
x887:
- 0.0
- 0.0
- 0.0
x888:
- 0.0
- 0.0
- 0.0
x889:
- 0.0
- 0.0
- 0.0
x89:
- 0.0
- 0.0
- 0.0
x890:
- 0.0
- 0.0
- 0.0
x891:
- 0.0
- 0.0
- 0.0
x892:
- 0.0
- 0.0
- 0.0
x893:
- 0.0
- 0.0
- 0.0
x894:
- 0.0
- 0.0
- 0.0
x895:
- 0.0
- 0.0
- 0.0
x896:
- 0.0
- 0.0
- 0.0
x897:
- 0.0
- 0.0
- 0.0
x898:
- 0.0
- 0.0
- 0.0
x899:
- 0.0
- 0.0
- 0.0
x9:
- 0.0
- 0.0
- 0.0
x90:
- 0.0
- 0.0
- 0.0
x900:
- 0.0
- 0.0
- 0.0
x901:
- 0.0
- 0.0
- 0.0
x902:
- 0.0
- 0.0
- 0.0
x903:
- 0.0
- 0.0
- 0.0
x904:
- 0.0
- 0.0
- 0.0
x905:
- 0.0
- 0.0
- 0.0
x906:
- 0.0
- 0.0
- 0.0
x907:
- 0.0
- 0.0
- 0.0
x908:
- 0.0
- 0.0
- 0.0
x909:
- 0.0
- 0.0
- 0.0
x91:
- 0.0
- 0.0
- 0.0
x910:
- 0.0
- 0.0
- 0.0
x911:
- 0.0
- 0.0
- 0.0
x912:
- 0.0
- 0.0
- 0.0
x913:
- 0.0
- 0.0
- 0.0
x914:
- 0.0
- 0.0
- 0.0
x915:
- 0.0
- 0.0
- 0.0
x916:
- 0.0
- 0.0
- 0.0
x917:
- 0.0
- 0.0
- 0.0
x918:
- 0.0
- 0.0
- 0.0
x919:
- 0.0
- 0.0
- 0.0
x92:
- 0.0
- 0.0
- 0.0
x920:
- 0.0
- 0.0
- 0.0
x921:
- 0.0
- 0.0
- 0.0
x922:
- 0.0
- 0.0
- 0.0
x923:
- 0.0
- 0.0
- 0.0
x924:
- 0.0
- 0.0
- 0.0
x925:
- 0.0
- 0.0
- 0.0
x926:
- 0.0
- 0.0
- 0.0
x927:
- 0.0
- 0.0
- 0.0
x928:
- 0.0
- 0.0
- 0.0
x929:
- 0.0
- 0.0
- 0.0
x93:
- 0.0
- 0.0
- 0.0
x930:
- 0.0
- 0.0
- 0.0
x931:
- 0.0
- 0.0
- 0.0
x932:
- 0.0
- 0.0
- 0.0
x933:
- 0.0
- 0.0
- 0.0
x934:
- 0.0
- 0.0
- 0.0
x935:
- 0.0
- 0.0
- 0.0
x936:
- 0.0
- 0.0
- 0.0
x937:
- 0.0
- 0.0
- 0.0
x938:
- 0.0
- 0.0
- 0.0
x939:
- 0.0
- 0.0
- 0.0
x94:
- 0.0
- 0.0
- 0.0
x940:
- 0.0
- 0.0
- 0.0
x941:
- 0.0
- 0.0
- 0.0
x942:
- 0.0
- 0.0
- 0.0
x943:
- 0.0
- 0.0
- 0.0
x944:
- 0.0
- 0.0
- 0.0
x945:
- 0.0
- 0.0
- 0.0
x946:
- 0.0
- 0.0
- 0.0
x947:
- 0.0
- 0.0
- 0.0
x948:
- 0.0
- 0.0
- 0.0
x949:
- 0.0
- 0.0
- 0.0
x95:
- 0.0
- 0.0
- 0.0
x950:
- 0.0
- 0.0
- 0.0
x951:
- 0.0
- 0.0
- 0.0
x952:
- 0.0
- 0.0
- 0.0
x953:
- 0.0
- 0.0
- 0.0
x954:
- 0.0
- 0.0
- 0.0
x955:
- 0.0
- 0.0
- 0.0
x956:
- 0.0
- 0.0
- 0.0
x957:
- 0.0
- 0.0
- 0.0
x958:
- 0.0
- 0.0
- 0.0
x959:
- 0.0
- 0.0
- 0.0
x96:
- 0.0
- 0.0
- 0.0
x960:
- 0.0
- 0.0
- 0.0
x961:
- 0.0
- 0.0
- 0.0
x962:
- 0.0
- 0.0
- 0.0
x963:
- 0.0
- 0.0
- 0.0
x964:
- 0.0
- 0.0
- 0.0
x965:
- 0.0
- 0.0
- 0.0
x966:
- 0.0
- 0.0
- 0.0
x967:
- 0.0
- 0.0
- 0.0
x968:
- 0.0
- 0.0
- 0.0
x969:
- 0.0
- 0.0
- 0.0
x97:
- 0.0
- 0.0
- 0.0
x970:
- 0.0
- 0.0
- 0.0
x971:
- 0.0
- 0.0
- 0.0
x972:
- 0.0
- 0.0
- 0.0
x973:
- 0.0
- 0.0
- 0.0
x974:
- 0.0
- 0.0
- 0.0
x975:
- 0.0
- 0.0
- 0.0
x976:
- 0.0
- 0.0
- 0.0
x977:
- 0.0
- 0.0
- 0.0
x978:
- 0.0
- 0.0
- 0.0
x979:
- 0.0
- 0.0
- 0.0
x98:
- 0.0
- 0.0
- 0.0
x980:
- 0.0
- 0.0
- 0.0
x981:
- 0.0
- 0.0
- 0.0
x982:
- 0.0
- 0.0
- 0.0
x983:
- 0.0
- 0.0
- 0.0
x984:
- 0.0
- 0.0
- 0.0
x985:
- 0.0
- 0.0
- 0.0
x986:
- 0.0
- 0.0
- 0.0
x987:
- 0.0
- 0.0
- 0.0
x988:
- 0.0
- 0.0
- 0.0
x989:
- 0.0
- 0.0
- 0.0
x99:
- 0.0
- 0.0
- 0.0
x990:
- 0.0
- 0.0
- 0.0
x991:
- 0.0
- 0.0
- 0.0
x992:
- 0.0
- 0.0
- 0.0
x993:
- 0.0
- 0.0
- 0.0
x994:
- 0.0
- 0.0
- 0.0
x995:
- 0.0
- 0.0
- 0.0
x996:
- 0.0
- 0.0
- 0.0
x997:
- 0.0
- 0.0
- 0.0
x998:
- 0.0
- 0.0
- 0.0
x999:
- 0.0
- 0.0
- 0.0
---
# Model description
Middle Dutch NER with PassiveAggressiveClassifier
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
TESTING
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|---------------------|---------|
| C | 1.0 |
| average | False |
| class_weight | |
| early_stopping | False |
| fit_intercept | True |
| loss | hinge |
| max_iter | 1000 |
| n_iter_no_change | 5 |
| n_jobs | |
| random_state | 42 |
| shuffle | True |
| tol | 0.001 |
| validation_fraction | 0.1 |
| verbose | 0 |
| warm_start | False |
</details>
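As an illustration, the hyperparameter table above corresponds to the following scikit-learn constructor call (a sketch only; the card does not describe the feature-extraction pipeline that precedes it, and empty table cells map to `None`):

```python
from sklearn.linear_model import PassiveAggressiveClassifier

# Mirrors the hyperparameter table above.
clf = PassiveAggressiveClassifier(
    C=1.0,
    average=False,
    class_weight=None,
    early_stopping=False,
    fit_intercept=True,
    loss="hinge",
    max_iter=1000,
    n_iter_no_change=5,
    n_jobs=None,
    random_state=42,
    shuffle=True,
    tol=0.001,
    validation_fraction=0.1,
    verbose=0,
    warm_start=False,
)
```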
### Model Plot
The model plot is below.
```
PassiveAggressiveClassifier(random_state=42)
```
## Evaluation Results
You can find the details of the evaluation process and the evaluation results below.
| Metric | Value |
|-------------------------|----------|
| accuracy including 'O' | 0.901851 |
| f1 score including 'O' | 0.901851 |
| precision excluding 'O' | 0.647887 |
| recall excluding 'O' | 0.572852 |
| f1 excluding 'O' | 0.608063 |
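A sketch of how the "excluding 'O'" figures can be reproduced with scikit-learn, assuming flat per-token label lists `y_true` and `y_pred` (names are assumptions; the card does not include its evaluation script):

```python
from sklearn.metrics import precision_recall_fscore_support

# Score every entity class except the outside tag 'O'.
labels = sorted(set(y_true) - {"O"})
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, average="micro"
)
```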
### Confusion Matrix

# How to Get Started with the Model
[More Information Needed]
# Model Card Authors
Alassea TEST
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
**BibTeX**
```
@inproceedings{...,year={2022}}
```
|
Devmapall/paraphrase-quora | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 3 | null | ---
language:
- en
tags:
- pytorch
- causal-lm
license: creativeml-openrail-m
--- |
Devrim/prism-default | [
"license:mit"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Access to model hongS/donut-base-hangul_sep is restricted and you are not in the authorized list. Visit https://huggingface.co/hongS/donut-base-hangul_sep to ask for access. |
DewiBrynJones/wav2vec2-large-xlsr-welsh | [
"cy",
"dataset:common_voice",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 870.00 +/- 429.23
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AmineEA -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AmineEA -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AmineEA
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 5000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
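For orientation, a hedged sketch of how these hyperparameters map onto the SB3 `DQN` constructor (the zoo's actual training script is the authoritative version; this is an approximation):

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# make_atari_env applies the AtariWrapper listed under env_wrapper above;
# VecFrameStack reproduces frame_stack=4.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4"), n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    learning_rate=1e-4,
    buffer_size=100_000,
    learning_starts=100_000,
    batch_size=32,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
)
model.learn(total_timesteps=5_000_000)
```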
|
Dhruva/Interstellar | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 298.46 +/- 16.39
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, since this card does not specify them):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholders: point repo_id/filename at the Hub repo hosting this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
DicoTiar/wisdomfiy | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
---
CodeGen trained on decompiled ghidra C source code with the capability for answering questions such as:
1. What is the purpose of the code?
2. What language is the code written in (c/c++)?
|
DiegoBalam12/institute_classification | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- br
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large V2 Breton
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 br
type: mozilla-foundation/common_voice_11_0
config: br
split: test
args: br
metrics:
- name: Wer
type: wer
value: 35.10767627648489
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2 Breton
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 br dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6425
- Wer: 35.1077
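A minimal transcription sketch with the Transformers `pipeline` API (the checkpoint id below is a placeholder for this repo's actual id on the Hub):
```python
from transformers import pipeline

# "<user>/whisper-large-v2-breton" is a placeholder; use this repo's id.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-large-v2-breton"
)
transcriber.model.config.forced_decoder_ids = (
    transcriber.tokenizer.get_decoder_prompt_ids(
        language="br",
        task="transcribe"
    )
)
transcription = transcriber("path/to/my_audio.wav")
```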
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0065 | 5.03 | 3000 | 0.6425 | 35.1077 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Digakive/Hsgshs | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2500 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2500,
"warmup_steps": 250,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Dimedrolza/DialoGPT-small-cyberpunk | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- es
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
- cer
model-index:
- name: Whisper Large Spanish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 es
type: mozilla-foundation/common_voice_11_0
config: es
split: test
args: es
metrics:
- name: WER
type: wer
value: 4.673613637544826
- name: CER
type: cer
value: 1.5573247819517182
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs es_419
type: google/fleurs
config: es_419
split: test
args: es_419
metrics:
- name: WER
type: wer
value: 5.396216546072705
- name: CER
type: cer
value: 3.450427960057061
---
# Whisper Large Spanish
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on Spanish using the train split of [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0).
## Usage
```python
from transformers import pipeline
transcriber = pipeline(
"automatic-speech-recognition",
model="jonatasgrosman/whisper-large-es-cv11"
)
transcriber.model.config.forced_decoder_ids = (
transcriber.tokenizer.get_decoder_prompt_ids(
language="es",
task="transcribe"
)
)
transcription = transcriber("path/to/my_audio.wav")
```
## Evaluation
I've evaluated the model using the test split of two datasets: [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) (the same dataset used for fine-tuning) and [Fleurs](https://huggingface.co/datasets/google/fleurs) (a dataset not seen during fine-tuning). As Whisper can transcribe casing and punctuation, I've evaluated the model in 2 different scenarios: one using the raw text and the other using normalized text (lowercase + removal of punctuation). Additionally, for the Fleurs dataset, I've evaluated the model in a scenario that excludes transcriptions of numerical values, since numerical values are written out differently in Fleurs than in the fine-tuning dataset (Common Voice); this difference is expected to hurt the model's performance on this type of transcription in Fleurs.
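For reference, the "normalized text" scenario corresponds roughly to the sketch below (an illustration of lowercasing plus punctuation removal, not the exact normalizer used in the evaluation):
```python
import re

def normalize(text: str) -> str:
    # Lowercase and strip punctuation, keeping word characters and whitespace.
    return re.sub(r"[^\w\s]", "", text.lower())

print(normalize("¡Hola, Mundo!"))  # -> "hola mundo"
```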
### Common Voice 11
| | CER | WER |
| --- | --- | --- |
| [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) | 2.43 | 8.85 |
| [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) + text normalization | 1.56 | 4.67 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 3.71 | 12.34 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) + text normalization | 2.45 | 6.30 |
### Fleurs
| | CER | WER |
| --- | --- | --- |
| [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) | 3.06 | 9.11 |
| [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) + text normalization | 3.45 | 5.40 |
| [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) + keep only non-numeric samples | 1.83 | 7.57 |
| [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) + text normalization + keep only non-numeric samples | 2.36 | 4.14 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 2.30 | 8.50 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) + text normalization | 2.76 | 4.79 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) + keep only non-numeric samples | 1.93 | 7.33 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) + text normalization + keep only non-numeric samples | 2.50 | 4.28 |
|
DingleyMaillotUrgell/homer-bot | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1340485939626463235/7v6vswcR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Skeppy</div>
<div style="text-align: center; font-size: 14px;">@skeppy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Skeppy.
| Data | Skeppy |
| --- | --- |
| Tweets downloaded | 2025 |
| Retweets | 16 |
| Short tweets | 762 |
| Tweets kept | 1247 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2oc1nqe2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @skeppy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/108g0alr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/108g0alr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/skeppy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Dizoid/Lll | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
datasets:
- wikitext
language:
- en
pipeline_tag: fill-mask
---
**NOTE: THIS MODEL IS NOT INTEGRATED WITH HUGGING FACE**. Please use the version of this model converted to the newly implemented `Mega`
architecture in `transformers` ([link](https://huggingface.co/mnaylor/mega-base-wikitext))
# Moving Average Gated Attention (Mega): Pretrained LM
This repo contains pretrained weights for a language model with the Mega architecture (see [paper](https://arxiv.org/abs/2209.10655)).
I used the Mega source code (namely the `MegaEncoderLayer` class) and created wrappers for token embeddings and MLM prediction. This model
was pretrained for 5 epochs (11.3k gradient steps) on wikitext-103, which took roughly 5 hours on a single T4 (in Colab's free tier).
See [the Colab notebook](https://colab.research.google.com/drive/1qfUO6o5HRdxBblWlw058HVyvaEPhPpH8?usp=sharing)
for further training details. In order to load the pretrained weights for this model, you'll need to use the
[Mega repo](https://github.com/facebookresearch/mega) along with the example code at the end of the Colab notebook. |
Dkwkk/Da | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
thumbnail: "https://s1.fileditch.ch/gpghnxbfNyIINXmJAJCx.png"
language:
- en
tags:
- text-to-image
- stable-Diffusion
- stable-diffusion-diffusers
- diffusers
- safetensors
inference: true
---
<p align="center"> <img src="https://s1.fileditch.ch/iMqyjOnUtxntHolBiNgT.png" width=35% height=35%> </p>
<p align="center"> Baka-Diffusion - A latent diffusion model fine-tuned to output high-quality anime illustrations! </p>
<img src="https://s1.fileditch.ch/gpghnxbfNyIINXmJAJCx.png">
________
# Baka Diffusion V1
Welcome to Baka-Diffusion! A latent diffusion model trained and fine-tuned on **high-quality** images using the **Danbooru** tagging dataset! Our models are made to output better lighting and quality with just a few tags!
e.g. **_masterpiece, best quality, 1girl, hakurei reimu, chromatic aberration, white background,_**
________
# **The model was made to be used with Latent Space Upscaling in mind**
*Below are some examples generated with this model!*
<img src=https://s1.fileditch.ch/ansGBCGYWyDhaJJGTnE.png width=100% height=100%>
```
masterpiece, best quality, 1girl, hakurei reimu, chromatic aberration, white background,
Negative prompt: lowres, (bad anatomy), text, cropped, worst quality, low quality, jpeg artifacts, signature, watermark, username, artist name, (out of frame), black and white, obese
```
# Versions!
Our models come in two versions, FP16 and FP32. From my testing, FP32 can create more complex backgrounds, whereas FP16 creates simpler ones.
# Usage!
- DPM++ SDE Karras seems to be the best sampler for BakaV1
- ClipSkip 1
- Upscaler (Latent)
- Denoise Strength (0.5~0.7)
- below 0.5 denoise strength, blocking artifacts can occur
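For diffusers users, a minimal loading sketch (the repo id is a placeholder, since this card does not name the repo hosting the diffuser weights; the prompt is the example from above):
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id; point it at the Baka-Diffusion V1 diffusers weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "<user>/Baka-Diffusion-V1", torch_dtype=torch.float16
).to("cuda")
image = pipe("masterpiece, best quality, 1girl, hakurei reimu, white background").images[0]
image.save("baka_v1_sample.png")
```
Note that the sampler settings above refer to the webui; in diffusers you would pick an equivalent scheduler class instead.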
## Changelog!
- Removed Ckpt versions 1/16/23
- Added Diffuser weights! 1/16/23
- Model page overhaul 1/16/23
## This Project would be impossible without
- [HaisenBerg](https://huggingface.co/haisenberguwu)
# Baka-Diffusion Version2
Baka-Diffusion V2 has been delayed :( I have been working on the model for weeks and have hit a dead end. There were working test versions, but the quality isn't as notable as I thought it would be; it felt more like a Version 1.5 than a V2, so I have restarted my entire process. It will come out eventually, but I can't pinpoint an exact date, so please be patient!
<img src="https://s1.fileditch.ch/ryvDJqVPZGmttyAYUCPQ.png">
Many thanks,
Hosioka.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the **Model** to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
--------------------------------------------------------------------------------------
# TOS!
- You are not allowed to run it on services that generate images for money!
- You are not allowed to sell the images it generates!
- You are not allowed to sell this model or merges made with it!
If a user is found to be selling my models or offering paid generation services, they will face consequences!
---------------------------------------------------------------------------------------
These models can easily be loaded into a Google Colab and used for FREE! stop milking my shit bruh
heres an example colab: https://colab.research.google.com/drive/1wEa-tS10h4LlDykd87TF5zzpXIIQoCmq
|
Dmitry12/sber | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
This model (an extreme learning machine, a shallow neural net) was trained in R to reproject sentence-transformers embeddings ('all-mpnet-base-v2') into wikidata5m knowledge graph embeddings (RotatE version).
It is stored with fastSave (https://github.com/barkasn/fastSave) and depends on the library elmNNRcpp
(https://cran.r-project.org/web/packages/elmNNRcpp/index.html).
It was trained on a subset of the knowledge graph embeddings (400k version) of entities that matched the common sense knowledge graph or the entities in numberbatch.
I intend to redo this with a larger subset / the full data.
|
Doiman/DialoGPT-medium-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- object-detection
- computer-vision
- yolox
- yolov3
- yolov5
datasets:
- detection-datasets/coco
---
### Model Description
[YOLOX](https://arxiv.org/abs/2107.08430) is a high-performance anchor-free YOLO that exceeds YOLOv3 through YOLOv5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support.
[YOLOXDetect-Pip](https://github.com/kadirnar/yolox-pip/): this repo is a packaged version of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) for easy installation and use.
[Paper repo](https://github.com/Megvii-BaseDetection/YOLOX): implementation of the YOLOX paper.
### Installation
```
pip install yoloxdetect
```
### Yolox Inference
```python
from yoloxdetect import YoloxDetector
from yolox.data.datasets import COCO_CLASSES
model = YoloxDetector(
model_path = "kadirnar/yolox_s-v0.1.1",
config_path = "configs.yolox_s",
device = "cuda:0",
hf_model=True
)
model.classes = COCO_CLASSES
model.conf = 0.25
model.iou = 0.45
model.show = False
model.save = True
pred = model.predict(image='data/images', img_size=640)
```
### BibTeX Entry and Citation Info
```
@article{yolox2021,
title={YOLOX: Exceeding YOLO Series in 2021},
author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
journal={arXiv preprint arXiv:2107.08430},
year={2021}
}
``` |
DongHyoungLee/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: creativeml-openrail-m
language:
- en
thumbnail: "https://huggingface.co/Norod78/SD15-VinageStyle/resolve/main/sample_images/SD15-VintageStyle-Thumbnail.jpg"
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
datasets:
- Norod78/vintage-blip-captions
inference: true
widget:
- text: A Pulp Cover featuring Gal Gadot, very detailed, clean, high quality, sharp image, Saturno Butto
- text: A photo of an astronaut riding a horse on mars, Vintage style, Pulp Cover, very detailed, clean, high quality, sharp image, Dave Dorman
- text: A beautiful person, Vintage face
- text: A Vintage style commercial for cat food
---
# SDv1.5 SD15-VinageStyle model, trained by Norod78 in two parts.
### First, Stable Diffusion v1.5 was fine-tuned for 10k steps using the [Huggingface Diffusers train_text_to_image script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) upon [Norod78/vintage-blip-captions](https://huggingface.co/datasets/Norod78/vintage-blip-captions). It then underwent further fine-tuning with Dreambooth, using the same images as the ones in the dataset, but rather than having them blip-captioned, they were split into "Vintage style", "Vintage face" and "Pulp cover" concepts.
### Dreambooth model was trained with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
## Because the model was first fine-tuned on the whole dataset and only then fine-tuned again to learn each individual concept, you can use prompts without Trigger-Words and still get a subtle "Vintage" touch
# Trigger-Words are: "Vintage", "Vintage style", "Vintage face", "Pulp cover"

## A few sample pictures generated with this mode (more available [here](https://huggingface.co/Norod78/SD15-VinageStyle/tree/main/sample_images)):
A photo of Gal Gadot as wonderwoman, Vintage style, very detailed, clean, high quality, sharp image. Negative prompt: grainy, blurry, text, watermark, inconsistent, smudged. Steps: 40, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3486356206, Face restoration: CodeFormer, Size: 512x512, Model hash: 33006be6, Model: VintageStyle, Batch size: 4, Batch pos: 2

A photo of Gal Gadot as wonderwoman fighting against Cthulhu, Vintage, very detailed, clean, high quality, sharp image, Naoto Hattori. Negative prompt: grainy, blurry, text, watermark, inconsistent, smudged. Steps: 40, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3408435550, Face restoration: CodeFormer, Size: 512x512, Model hash: 33006be6, Model: VintageStyle, Batch size: 4, Batch pos: 3
 |
DongHyoungLee/kogpt2-base-v2-finetuned-kogpt2_nsmc_single_sentence_classification | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- it
thumbnail: "url to a thumbnail used in social sharing"
tags:
- psychology
- distilgpt2
widget:
- text: "Il cognitivismo"
- text: "Parliamo di attaccamento"
- text: "Le cause del disturbo d'ansia nei bambini sono"
---
# Italian Psychology DistilGPT-2
This model is a fine-tuned version of the DistilGPT-2 language model, trained on a dataset of Italian psychology articles.
It is capable of generating human-like text on topics related to psychology and mental health in the Italian language.
## Model details
The base model used for fine-tuning is DistilGPT-2, a distilled version of GPT-2 with a transformer architecture (roughly 82M parameters).
The fine-tuning dataset consists of approximately 10,000 Italian psychology articles.
## Example usage
```python
from transformers import pipeline
nlp = pipeline("text-generation", model="misterkilgore/distilgpt2-psy-ita")
generated_text = nlp("Le cause del disturbo d'ansia nei bambini sono", max_length=100)
print(generated_text)
```
## Limitations and bias
This model has been trained on a dataset of Italian psychology articles and may not perform well on other types of text or in other languages.
Additionally, the dataset used to fine-tune the model may contain biases and limitations, which will be reflected in the generated text.
## Dataset
The dataset used to fine-tune this model is composed of Italian psychology articles.
It contains various topics on mental health and psychology, but some limitations and biases may be present. This model is meant to be used only for research and educational purposes.
## Training data
The training data is composed of Italian psychology articles.
Fine-tuning was performed on this dataset to adapt the base GPT-2 model to the specific topic of psychology and mental health in Italian language.
|
Donghyun/L2_BERT | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.59 +/- 42.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, since this card does not specify them):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholders: point repo_id/filename at the Hub repo hosting this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Dongjae/mrc2reader | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="yanick/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
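The `load_from_hub` helper above is not part of gym; a minimal sketch of it, assuming the model was pushed as a pickled dict (as in the Q-learning course notebooks):
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled file from the Hub and unpickle it; the dict is
    # expected to hold keys like "env_id", "qtable", "max_steps", "eval_seed".
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```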
|
Dongmin/testmodel | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 11 | null | ---
library_name: sklearn
datasets:
- scikit-learn/iris
metrics:
- Accuracy
- F1 Score
- Confusion matrix
---
# Model Card for Logistic regression model
A multiclass classification task using a logistic regression model trained on the [Iris dataset](https://scikit-learn.org/dev/modules/generated/sklearn.datasets.load_iris.html), developed in scikit-learn version 1.0 and loaded in both versions 1.0 and 1.1.
# Model Details
## Model Description
- **Developed by:** Adebayo Chibundum
- **Model type:** Logistic Regression
- **License:** None
# Training Details
## Training Data
- [Iris dataset](https://scikit-learn.org/dev/modules/generated/sklearn.datasets.load_iris.html)
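A minimal reproduction sketch (not the exact training script; the split and hyperparameters are assumptions):
```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

print(accuracy_score(y_test, pred))             # Accuracy
print(f1_score(y_test, pred, average="macro"))  # Macro-averaged F1 Score
print(confusion_matrix(y_test, pred))           # Confusion matrix
```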
# Evaluation
### Metrics and Results
- **Accuracy:** 1.0
- **Macro-averaged F1 Score:** 1.0
- **Confusion Matrix** |
Doogie/Waynehills-KE-T5-doogie | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
thumbnail: "https://huggingface.co/jkcarney/source4_v1.0/blob/main/images/woman1.png"
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- safetensors
- diffusers
inference: true
---

[*CKPT DOWNLOAD LINK*](https://huggingface.co/jkcarney/source4_v1.0/blob/main/source4_v1.0.ckpt)
[*SAFETENSORS DOWNLOAD LINK*](https://huggingface.co/jkcarney/source4_v1.0/blob/main/source4_v1.0.safetensors)
# Introduction
Source4 is a Stable Diffusion 1.5/Dreambooth model trained on portraits with dramatic, multicolored lighting. These portraits are heavily inspired by modern, multicolored theatrical lighting (hence the name Source4). This model was fine-tuned without prior-preservation loss for 8000 steps on over 90 high-quality 512x512 images.
In the prompt, use activation token `source4 style`

# Recommendations
I recommend experimenting with different colors in your prompt. For example, adding `teal tones` or `red tones` to your prompt brings those respective colors out. Also try a smaller weight on the `source4 style` prompt (i.e., `(source4 style:0.8)`) for a slightly less pronounced but more realistic effect.
`Euler a` and `DPM++ 2S a Karras` for 25-30 steps have typically given me the highest quality images. Euler a typically gives me more "dreamy" portraits while DPM++ 2S a Karras gives me more realistic portraits.
If the effect is too pronounced or faces are totally blurred out, I recommend decreasing the weight of the activation token. An alternative is to specify `detailed face` in the prompt to ensure a face is actually drawn.
# Additional Samples






Welcome to the technicolored world.
|
Waynehillsdev/Waynehills_summary_tensorflow | [
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: bloom-560m-finetuned-unnatural-instructions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-560m-finetuned-unnatural-instructions
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7021
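A minimal generation sketch (the checkpoint id below is a placeholder for this repo's actual id on the Hub):
```python
from transformers import pipeline

# Placeholder id: replace with the actual repo id hosting this checkpoint.
generator = pipeline("text-generation", model="<user>/bloom-560m-finetuned-unnatural-instructions")
output = generator("Instruction: Write a one-sentence summary of the water cycle.", max_new_tokens=64)
print(output[0]["generated_text"])
```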
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6937 | 0.32 | 1000 | 1.6739 |
| 1.5527 | 0.63 | 2000 | 1.5767 |
| 1.5305 | 0.95 | 3000 | 1.5221 |
| 1.1514 | 1.26 | 4000 | 1.5201 |
| 1.1564 | 1.58 | 5000 | 1.5042 |
| 1.1365 | 1.89 | 6000 | 1.4799 |
| 0.7729 | 2.21 | 7000 | 1.6496 |
| 0.7713 | 2.52 | 8000 | 1.5909 |
| 0.8063 | 2.84 | 9000 | 1.6073 |
| 0.4753 | 3.15 | 10000 | 1.9611 |
| 0.4719 | 3.47 | 11000 | 2.0177 |
| 0.4732 | 3.79 | 12000 | 2.0341 |
| 0.2747 | 4.1 | 13000 | 2.5669 |
| 0.2582 | 4.42 | 14000 | 2.6801 |
| 0.2751 | 4.73 | 15000 | 2.6907 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Waynehillsdev/wav2vec2-base-timit-demo-colab | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | This model is converted with https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py.
However, the tokenizer in the diffuser model is wrong, for proper usage, see description at https://medium.com/@enryu9000/anifusion-sd-91a59431a6dd, and instructions/examples at https://github.com/enryu43/anifusion2-stable-diffusion.
Also, the original checkpoint in the Latent Diffusion format is available.
Installation instructions for webui: https://gist.github.com/enryu43/fccaa7f165ffcb214780d203c565761f
|
Doohae/p_encoder | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 530.00 +/- 152.40
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Honza -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Honza -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Honza
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Doohae/q_encoder | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- wer
model-index:
- name: model_broadclass_onSet2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_broadclass_onSet2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5931
- 0 Precision: 1.0
- 0 Recall: 0.9615
- 0 F1-score: 0.9804
- 0 Support: 26
- 1 Precision: 0.9730
- 1 Recall: 0.9231
- 1 F1-score: 0.9474
- 1 Support: 39
- 2 Precision: 1.0
- 2 Recall: 1.0
- 2 F1-score: 1.0
- 2 Support: 19
- 3 Precision: 0.8125
- 3 Recall: 1.0
- 3 F1-score: 0.8966
- 3 Support: 13
- Accuracy: 0.9588
- Macro avg Precision: 0.9464
- Macro avg Recall: 0.9712
- Macro avg F1-score: 0.9561
- Macro avg Support: 97
- Weighted avg Precision: 0.9640
- Weighted avg Recall: 0.9588
- Weighted avg F1-score: 0.9597
- Weighted avg Support: 97
- Wer: 0.6924
- Mtrix: [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 0, 36, 0, 3], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:---------------------------------------------------------------------------------------:|
| 2.3566 | 4.16 | 100 | 2.1836 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 2.2923 | 8.33 | 200 | 2.1159 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.9868 | 12.49 | 300 | 1.9923 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.7313 | 16.65 | 400 | 1.6081 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.6688 | 20.82 | 500 | 1.5971 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.5888 | 24.98 | 600 | 1.6098 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.5986 | 29.16 | 700 | 1.6984 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.5437 | 33.33 | 800 | 1.4933 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 1.1358 | 37.49 | 900 | 1.1118 | 0.2680 | 1.0 | 0.4228 | 26 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 19 | 0.0 | 0.0 | 0.0 | 13 | 0.2680 | 0.0670 | 0.25 | 0.1057 | 97 | 0.0718 | 0.2680 | 0.1133 | 97 | 0.9869 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 39, 0, 0, 0], [2, 19, 0, 0, 0], [3, 13, 0, 0, 0]] |
| 0.983 | 41.65 | 1000 | 1.0538 | 0.3171 | 1.0 | 0.4815 | 26 | 1.0 | 0.0256 | 0.05 | 39 | 1.0 | 0.3158 | 0.4800 | 19 | 0.875 | 0.5385 | 0.6667 | 13 | 0.4124 | 0.7980 | 0.4700 | 0.4195 | 97 | 0.8002 | 0.4124 | 0.3325 | 97 | 0.9732 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 37, 1, 0, 1], [2, 13, 0, 6, 0], [3, 6, 0, 0, 7]] |
| 0.96 | 45.82 | 1100 | 0.9324 | 0.4561 | 1.0 | 0.6265 | 26 | 1.0 | 0.3846 | 0.5556 | 39 | 1.0 | 0.6316 | 0.7742 | 19 | 1.0 | 1.0 | 1.0 | 13 | 0.6804 | 0.8640 | 0.7540 | 0.7391 | 97 | 0.8542 | 0.6804 | 0.6770 | 97 | 0.9510 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 24, 15, 0, 0], [2, 7, 0, 12, 0], [3, 0, 0, 0, 13]] |
| 0.9569 | 49.98 | 1200 | 0.9106 | 0.52 | 1.0 | 0.6842 | 26 | 1.0 | 0.6410 | 0.7813 | 39 | 1.0 | 0.6316 | 0.7742 | 19 | 1.0 | 0.7692 | 0.8696 | 13 | 0.7526 | 0.88 | 0.7605 | 0.7773 | 97 | 0.8713 | 0.7526 | 0.7657 | 97 | 0.9343 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 14, 25, 0, 0], [2, 7, 0, 12, 0], [3, 3, 0, 0, 10]] |
| 0.943 | 54.16 | 1300 | 0.9142 | 0.7879 | 1.0 | 0.8814 | 26 | 1.0 | 0.8205 | 0.9014 | 39 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9286 | 1.0 | 0.9630 | 13 | 0.9175 | 0.9291 | 0.9420 | 0.9297 | 97 | 0.9336 | 0.9175 | 0.9183 | 97 | 0.9242 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 6, 32, 0, 1], [2, 1, 0, 18, 0], [3, 0, 0, 0, 13]] |
| 0.9177 | 58.33 | 1400 | 0.9201 | 0.7879 | 1.0 | 0.8814 | 26 | 1.0 | 0.7692 | 0.8696 | 39 | 1.0 | 1.0 | 1.0 | 19 | 0.8667 | 1.0 | 0.9286 | 13 | 0.9072 | 0.9136 | 0.9423 | 0.9199 | 97 | 0.9253 | 0.9072 | 0.9062 | 97 | 0.9197 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 7, 30, 0, 2], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.873 | 62.49 | 1500 | 0.8556 | 0.8387 | 1.0 | 0.9123 | 26 | 1.0 | 0.8718 | 0.9315 | 39 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9286 | 1.0 | 0.9630 | 13 | 0.9381 | 0.9418 | 0.9548 | 0.9449 | 97 | 0.9472 | 0.9381 | 0.9387 | 97 | 0.9293 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 4, 34, 0, 1], [2, 1, 0, 18, 0], [3, 0, 0, 0, 13]] |
| 0.798 | 66.65 | 1600 | 0.8133 | 0.8966 | 1.0 | 0.9455 | 26 | 1.0 | 0.8974 | 0.9459 | 39 | 1.0 | 1.0 | 1.0 | 19 | 0.9286 | 1.0 | 0.9630 | 13 | 0.9588 | 0.9563 | 0.9744 | 0.9636 | 97 | 0.9627 | 0.9588 | 0.9587 | 97 | 0.9071 | [[0, 1, 2, 3], [0, 26, 0, 0, 0], [1, 3, 35, 0, 1], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.7299 | 70.82 | 1700 | 0.7332 | 1.0 | 0.9615 | 0.9804 | 26 | 0.9744 | 0.9744 | 0.9744 | 39 | 1.0 | 1.0 | 1.0 | 19 | 0.9286 | 1.0 | 0.9630 | 13 | 0.9794 | 0.9757 | 0.9840 | 0.9794 | 97 | 0.9801 | 0.9794 | 0.9795 | 97 | 0.8636 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 0, 38, 0, 1], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.6432 | 74.98 | 1800 | 0.6808 | 1.0 | 0.9615 | 0.9804 | 26 | 0.9730 | 0.9231 | 0.9474 | 39 | 1.0 | 1.0 | 1.0 | 19 | 0.8125 | 1.0 | 0.8966 | 13 | 0.9588 | 0.9464 | 0.9712 | 0.9561 | 97 | 0.9640 | 0.9588 | 0.9597 | 97 | 0.7758 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 0, 36, 0, 3], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
| 0.6067 | 79.16 | 1900 | 0.5931 | 1.0 | 0.9615 | 0.9804 | 26 | 0.9730 | 0.9231 | 0.9474 | 39 | 1.0 | 1.0 | 1.0 | 19 | 0.8125 | 1.0 | 0.8966 | 13 | 0.9588 | 0.9464 | 0.9712 | 0.9561 | 97 | 0.9640 | 0.9588 | 0.9597 | 97 | 0.6924 | [[0, 1, 2, 3], [0, 25, 1, 0, 0], [1, 0, 36, 0, 3], [2, 0, 0, 19, 0], [3, 0, 0, 0, 13]] |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Doohae/roberta | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- conversational
---
# Peter Griffin DialoGPT Model |
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.46 +/- 20.00
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, not this model's actual files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then restore the policy.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
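Once loaded, the policy can be scored with Stable-Baselines3's evaluation helper (a sketch; assumes the LunarLander-v2 environment is available locally):
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

# Roll out 10 deterministic episodes and report the mean return.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```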
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-4 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 44 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- sroie
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlm_manifesto_bigdataset
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: sroie
type: sroie
config: discharge
split: test
args: discharge
metrics:
- name: Precision
type: precision
value: 0.9910554561717353
- name: Recall
type: recall
value: 0.992831541218638
- name: F1
type: f1
value: 0.991942703670546
- name: Accuracy
type: accuracy
value: 0.9988607234406152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm_manifesto_bigdataset
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0080
- Precision: 0.9911
- Recall: 0.9928
- F1: 0.9919
- Accuracy: 0.9989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
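For reference, these settings map onto the `transformers` Trainer roughly as follows (a sketch; `output_dir` and the rest of the `Trainer` wiring are assumptions, and the Adam settings above are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm_manifesto_bigdataset",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=1500,
)
```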
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.0 | 100 | 0.0820 | 0.9071 | 0.9104 | 0.9088 | 0.9855 |
| No log | 4.0 | 200 | 0.0291 | 0.9441 | 0.9677 | 0.9558 | 0.9946 |
| No log | 6.0 | 300 | 0.0121 | 0.9751 | 0.9821 | 0.9786 | 0.9977 |
| No log | 8.0 | 400 | 0.0089 | 0.9911 | 0.9946 | 0.9928 | 0.9989 |
| 0.1049 | 10.0 | 500 | 0.0083 | 0.9840 | 0.9892 | 0.9866 | 0.9983 |
| 0.1049 | 12.0 | 600 | 0.0077 | 0.9875 | 0.9928 | 0.9902 | 0.9986 |
| 0.1049 | 14.0 | 700 | 0.0081 | 0.9893 | 0.9910 | 0.9902 | 0.9986 |
| 0.1049 | 16.0 | 800 | 0.0081 | 0.9875 | 0.9892 | 0.9884 | 0.9983 |
| 0.1049 | 18.0 | 900 | 0.0081 | 0.9893 | 0.9910 | 0.9902 | 0.9986 |
| 0.0051 | 20.0 | 1000 | 0.0074 | 0.9822 | 0.9875 | 0.9848 | 0.9980 |
| 0.0051 | 22.0 | 1100 | 0.0083 | 0.9911 | 0.9928 | 0.9919 | 0.9989 |
| 0.0051 | 24.0 | 1200 | 0.0073 | 0.9893 | 0.9910 | 0.9902 | 0.9986 |
| 0.0051 | 26.0 | 1300 | 0.0070 | 0.9911 | 0.9928 | 0.9919 | 0.9989 |
| 0.0051 | 28.0 | 1400 | 0.0081 | 0.9911 | 0.9928 | 0.9919 | 0.9989 |
| 0.0022 | 30.0 | 1500 | 0.0080 | 0.9911 | 0.9928 | 0.9919 | 0.9989 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.2.2
- Tokenizers 0.13.2
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: apache-2.0
tags:
- object-detection
- computer-vision
- yolox
- yolov3
- yolov5
datasets:
- detection-datasets/coco
---
### Model Description
[YOLOX](https://arxiv.org/abs/2107.08430) is a high-performance, anchor-free YOLO that exceeds YOLOv3-v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support.
[YOLOXDetect-Pip](https://github.com/kadirnar/yolox-pip/): a packaged version of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) for easy installation and use.
Paper repo: the official implementation of the [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) paper.
### Installation
```
pip install yoloxdetect
```
### Yolox Inference
```python
from yoloxdetect import YoloxDetector
from yolox.data.datasets import COCO_CLASSES
model = YoloxDetector(
model_path = "kadirnar/yolox_tiny-v0.1.1",
config_path = "configs.yolox_tiny",
device = "cuda:0",
hf_model=True
)
model.classes = COCO_CLASSES
model.conf = 0.25
model.iou = 0.45
model.show = False
model.save = True
pred = model.predict(image='data/images', img_size=640)
```
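Here `conf` is the confidence threshold below which detections are discarded, and `iou` is the IoU threshold used for non-maximum suppression. Judging by the flag names, `show` and `save` control whether the annotated predictions are displayed or written to disk (inferred from the names, not verified against the package docs).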
### BibTeX Entry and Citation Info
```
@article{yolox2021,
title={YOLOX: Exceeding YOLO Series in 2021},
author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
journal={arXiv preprint arXiv:2107.08430},
year={2021}
}
``` |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | # Pile of Law Tokenizer
This tokenizer should be a drop-in replacement for the GPT2Tokenizer. It has the same special tokens, but was trained on a random sample of 1M documents from the [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) train split.
Its vocabulary has exactly 52,000 tokens, which differs from GPT-2's 50,257.
Usage:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sam-mosaic/pile-of-law-tokenizer")
```
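A quick round-trip check, continuing from the snippet above:
```python
# Encode a sentence, count the tokens, and decode back to text.
ids = tokenizer("The party of the first part shall indemnify the party of the second part.").input_ids
print(len(ids))
print(tokenizer.decode(ids))
```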
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-50 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: apache-2.0
tags:
- object-detection
- computer-vision
- yolox
- yolov3
- yolov5
datasets:
- detection-datasets/coco
---
### Model Description
[YOLOX](https://arxiv.org/abs/2107.08430) is a high-performance, anchor-free YOLO that exceeds YOLOv3-v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support.
[YOLOXDetect-Pip](https://github.com/kadirnar/yolox-pip/): a packaged version of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) for easy installation and use.
Paper repo: the official implementation of the [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) paper.
### Installation
```
pip install yoloxdetect
```
### Yolox Inference
```python
from yoloxdetect import YoloxDetector
from yolox.data.datasets import COCO_CLASSES
model = YoloxDetector(
model_path = "kadirnar/yolox_nano-v0.1.1",
config_path = "configs.yolox_s",
device = "cuda:0",
hf_model=True
)
model.classes = COCO_CLASSES
model.conf = 0.25
model.iou = 0.45
model.show = False
model.save = True
pred = model.predict(image='data/images', img_size=640)
```
### BibTeX Entry and Citation Info
```
@article{yolox2021,
title={YOLOX: Exceeding YOLO Series in 2021},
author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
journal={arXiv preprint arXiv:2107.08430},
year={2021}
}
``` |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-75 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
license: apache-2.0
tags:
- object-detection
- computer-vision
- yolox
- yolov3
- yolov5
datasets:
- detection-datasets/coco
---
### Model Description
[YOLOX](https://arxiv.org/abs/2107.08430) is a high-performance, anchor-free YOLO that exceeds YOLOv3-v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support.
[YOLOXDetect-Pip](https://github.com/kadirnar/yolox-pip/): a packaged version of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) for easy installation and use.
Paper repo: the official implementation of the [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) paper.
### Installation
```
pip install yoloxdetect
```
### Yolox Inference
```python
from yoloxdetect import YoloxDetector
from yolox.data.datasets import COCO_CLASSES
model = YoloxDetector(
model_path = "kadirnar/yolox_m-v0.1.1",
config_path = "configs.yolox_m",
device = "cuda:0",
hf_model=True
)
model.classes = COCO_CLASSES
model.conf = 0.25
model.iou = 0.45
model.show = False
model.save = True
pred = model.predict(image='data/images', img_size=640)
```
### BibTeX Entry and Citation Info
```
@article{yolox2021,
title={YOLOX: Exceeding YOLO Series in 2021},
author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
journal={arXiv preprint arXiv:2107.08430},
year={2021}
}
``` |
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Seif/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
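A minimal stand-in for `load_from_hub`, assuming the artifact is a pickled dict as in the Deep RL course notebooks (the helper name and pickle layout are assumptions, not part of this repo):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table bundle from the Hub and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```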
|
albert-base-v1 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38,156 | 2022-12-21T22:11:14Z | ---
license: apache-2.0
tags:
- object-detection
- computer-vision
- yolox
- yolov3
- yolov5
datasets:
- detection-datasets/coco
---
### Model Description
[YOLOX](https://arxiv.org/abs/2107.08430) is a high-performance, anchor-free YOLO that exceeds YOLOv3-v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support.
[YOLOXDetect-Pip](https://github.com/kadirnar/yolox-pip/): a packaged version of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) for easy installation and use.
Paper repo: the official implementation of the [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) paper.
### Installation
```
pip install yoloxdetect
```
### Yolox Inference
```python
from yoloxdetect import YoloxDetector
from yolox.data.datasets import COCO_CLASSES
model = YoloxDetector(
model_path = "kadirnar/yolox_l-v0.1.1",
config_path = "configs.yolox_l",
device = "cuda:0",
hf_model=True
)
model.classes = COCO_CLASSES
model.conf = 0.25
model.iou = 0.45
model.show = False
model.save = True
pred = model.predict(image='data/images', img_size=640)
```
### BibTeX Entry and Citation Info
```
@article{yolox2021,
title={YOLOX: Exceeding YOLO Series in 2021},
author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
journal={arXiv preprint arXiv:2107.08430},
year={2021}
}
``` |
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | 2022-12-21T22:12:09Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Seif/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
albert-xlarge-v2 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,973 | 2022-12-21T22:12:25Z | ---
license: apache-2.0
tags:
- object-detection
- computer-vision
- yolox
- yolov3
- yolov5
datasets:
- detection-datasets/coco
---
### Model Description
[YOLOX](https://arxiv.org/abs/2107.08430) is a high-performance, anchor-free YOLO that exceeds YOLOv3-v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support.
[YOLOXDetect-Pip](https://github.com/kadirnar/yolox-pip/): a packaged version of [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) for easy installation and use.
Paper repo: the official implementation of the [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) paper.
### Installation
```
pip install yoloxdetect
```
### Yolox Inference
```python
from yoloxdetect import YoloxDetector
from yolox.data.datasets import COCO_CLASSES
model = YoloxDetector(
model_path = "kadirnar/yolox_x-v0.1.1",
config_path = "configs.yolox_x",
device = "cuda:0",
hf_model=True
)
model.classes = COCO_CLASSES
model.conf = 0.25
model.iou = 0.45
model.show = False
model.save = True
pred = model.predict(image='data/images', img_size=640)
```
### BibTeX Entry and Citation Info
```
@article{yolox2021,
title={YOLOX: Exceeding YOLO Series in 2021},
author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
journal={arXiv preprint arXiv:2107.08430},
year={2021}
}
``` |
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | 2022-12-21T22:24:17Z | ---
language:
- el
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper medium Greek El Greco
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 el
type: mozilla-foundation/common_voice_11_0
config: el
split: test
args: el
metrics:
- name: Wer
type: wer
value: 9.899702823179792
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper medium Greek El Greco
This model is a fine-tuned version of [emilios/whisper-medium-el-n2](https://huggingface.co/emilios/whisper-medium-el-n2) on the mozilla-foundation/common_voice_11_0 el dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5669
- Wer: 9.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
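Until this section is filled in, the checkpoint can be tried with the standard ASR pipeline (a sketch; the model id below is a placeholder for this repo's id):
```python
from transformers import pipeline

# "your-username/whisper-medium-el" stands in for this repo's actual id.
asr = pipeline("automatic-speech-recognition", model="your-username/whisper-medium-el")
print(asr("greek_sample.wav")["text"])
```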
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 11000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.0014 | 58.82 | 1000 | 0.4951 | 10.3640 |
| 0.0006 | 117.65 | 2000 | 0.5181 | 10.2805 |
| 0.0007 | 175.82 | 3000 | 0.5317 | 10.1133 |
| 0.0004 | 234.65 | 4000 | 0.5396 | 10.1226 |
| 0.0004 | 293.47 | 5000 | 0.5532 | 10.1040 |
| 0.0013 | 352.29 | 6000 | 0.5645 | 10.0854 |
| 0.0002 | 411.12 | 7000 | 0.5669 | 10.1133 |
| 0.0001 | 469.94 | 8000 | 0.5669 | 9.8997 |
| 0.0001 | 528.76 | 9000 | 0.5645 | 9.9276 |
| 0.0001 | 587.82 | 10000 | 0.5674 | 9.9647 |
| 0.0003 | 646.82 | 11000 | 0.5669 | 9.9461 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221216+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
bert-base-german-dbmdz-cased | [
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,814 | 2022-12-21T22:29:08Z | ---
language:
- or
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Odia
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 or
type: mozilla-foundation/common_voice_11_0
config: or
split: test
args: or
metrics:
- name: Wer
type: wer
value: 26.093088857545837
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Odia
This model is a fine-tuned version of [auro/whisper-cli-small-or](https://huggingface.co/auro/whisper-cli-small-or) on the mozilla-foundation/common_voice_11_0 or dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5089
- Wer: 26.0931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0 | 12.01 | 250 | 0.5089 | 26.0931 |
| 0.0 | 24.02 | 500 | 0.5542 | 26.6291 |
| 0.0 | 37.01 | 750 | 0.5804 | 26.5162 |
| 0.0 | 49.02 | 1000 | 0.5902 | 26.4598 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
bert-large-uncased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 480,510 | 2022-12-21T22:53:51Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Isaacp/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bert-large-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,058,496 | 2022-12-21T23:00:44Z | ---
language:
- uk
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
model-index:
- name: whisper-large-uk
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: uk
split: test
args: uk
metrics:
- name: Wer
type: wer
value: 10.02262314404669
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Fleurs
type: google/fleurs
config: uk_ua
split: test
args: uk_ua
metrics:
- name: Wer
type: wer
value: 7.564370215727209
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-uk
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2527
- eval_wer: 10.0226
- eval_runtime: 9610.7996
- eval_samples_per_second: 0.747
- eval_steps_per_second: 0.023
- epoch: 1.8
- step: 1098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
camembert-base | [
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,440,898 | null | ---
license: openrail++
tags:
- stable-diffusion
- text-to-image
- stable-diffusion-diffusers
- diffusers
inference: true
---
# .
# .
# .
# .
# .
# .
# ❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗
# This version is deprecated.
# Please use [Mitsua Diffusion One](https://huggingface.co/Mitsua/mitsua-diffusion-one), which is a successor of this model.
# ❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗❗
# .
# .
# .
# .
# .
# Mitsua Diffusion CC0 Model Card
Mitsua Diffusion CC0 is a latent text-to-image diffusion model, whose U-Net is **trained from scratch using only public domain/CC0 or copyright images with permission for use**.
Text Encoder and VAE are borrowed from [Stable Diffusion v2.1 base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base/).
This will be used as a base model for [**AI VTuber Elan Mitsua🖌️**](https://elanmitsua.com/en/)’s activity.
❗❗ **Currently the model has super low visual quality and limited diversity** ❗❗
Yes, the visual quality is not good yet. Most modern artistic concepts are lost completely. However, since she is an AI growing in an ethical fashion, this is a good starting point for Mitsua-chan!
You can join [her training on Twitter](https://twitter.com/elanmitsua)! Please support Mitsua-chan!🎉
Further training will be done on a fully opt-in basis. If you are interested, [please click here to submit an opt-in application](https://forms.gle/Nk3M7UyqSgYAqdpA6).
We are active on [a Discord server for opt-in participants only](https://discord.com/invite/7VTGRweTUg). Communication is currently in Japanese.

You can check [here to all prompts to generate these images](https://huggingface.co/Mitsua/mitsua-diffusion-cc0/resolve/main/images/mitsua_cc0_works_prompts.csv).
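Since the checkpoint follows the Stable Diffusion 2.x layout described above, it can presumably be loaded with `diffusers` in the usual way (a sketch; dtype and device choices are up to you):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load this repo's weights; fp16 on GPU keeps memory use modest.
pipe = StableDiffusionPipeline.from_pretrained(
    "Mitsua/mitsua-diffusion-cc0", torch_dtype=torch.float16
).to("cuda")
image = pipe("a public-domain style landscape painting").images[0]
image.save("landscape.png")
```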
## Training Data Sources
All data was obtained ethically and in compliance with each site's terms and conditions.
No copyrighted images were used to train this model without permission.
No AI-generated images are in the dataset.
- Traditional Artwork in public domain / CC0
- MET Museum Open Access
- Smithsonian Open Access
- Cleveland Museum of Art Open Access
- National Gallery of Art Open Access
- ArtBench-10 (public domain subset)
- CC0 Photos
- Flickr, Wikimedia Commons
- CC0 NFTs *1
- goblintown.nft, mfer, tubby-cats, Timeless
- CC0 VRM models
- made by VRoid Project, pastelkies, yomox9 (all CC0 subset)
- We generated a bunch of synthesized images dataset rendered with various poses and camera angles.
- Copyright images with permission for use
- Generative and Visual Artworks made by Rhizomatiks
Approx 11M images in total with data augmentation.
1. Their work is released under a CC0 license, but if you are considering using this model to create a work inspired by their NFTs and to sell it as an NFT, please consider paying them a royalty to help the CC0 NFT community grow.
## License
[Creative Open-Rail++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
❗❗ “Mitsua Diffusion CC0” means most of the training data is CC0. **The model license itself is NOT CC0.** ❗❗
This model is open access and available to all, with a CreativeML OpenRAIL++-M license further specifying rights and usage. The CreativeML OpenRAIL++-M License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may redistribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL++-M license with all your users (please read the license entirely and carefully). [Please read the full license here](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
## Developed by
- Stable Diffusion 2.1: Robin Rombach, Patrick Esser
- Mitsua Diffusion CC0 : Abstract Engine dev team
|
AdapterHub/bert-base-uncased-pf-quoref | [
"bert",
"en",
"dataset:quoref",
"arxiv:2104.08247",
"adapter-transformers",
"question-answering"
]
| question-answering | {
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-large-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7993
- Wer: 21.2788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 800
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.007 | 8.33 | 100 | 0.5728 | 21.4885 |
| 0.0007 | 16.67 | 200 | 0.7017 | 22.1174 |
| 0.0003 | 25.0 | 300 | 0.7358 | 21.5933 |
| 0.0002 | 33.33 | 400 | 0.7598 | 21.5933 |
| 0.0002 | 41.67 | 500 | 0.7793 | 22.0126 |
| 0.0001 | 50.0 | 600 | 0.7896 | 22.0126 |
| 0.0001 | 58.33 | 700 | 0.7969 | 21.2788 |
| 0.0001 | 66.67 | 800 | 0.7993 | 21.2788 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
AdapterHub/roberta-base-pf-race | [
"roberta",
"en",
"dataset:race",
"arxiv:2104.08247",
"adapter-transformers",
"adapterhub:rc/race"
]
| null | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.86 +/- 18.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (substitute the real repo id and checkpoint filename for this model):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id/filename -- replace with this model's actual files.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AdapterHub/roberta-base-pf-social_i_qa | [
"roberta",
"en",
"dataset:social_i_qa",
"arxiv:2104.08247",
"adapter-transformers"
]
| null | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/luncdao/1671725263459/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1595722922185965568/lddrKTn5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🔥⚛️ 𝕄𝔼ℝ𝔾𝔼 𝔻𝔸𝕆 ⚛️🔥</div>
<div style="text-align: center; font-size: 14px;">@luncdao</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🔥⚛️ 𝕄𝔼ℝ𝔾𝔼 𝔻𝔸𝕆 ⚛️🔥.
| Data | 🔥⚛️ 𝕄𝔼ℝ𝔾𝔼 𝔻𝔸𝕆 ⚛️🔥 |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 462 |
| Short tweets | 358 |
| Tweets kept | 2397 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xrp9d61/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @luncdao's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2i4u19uf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2i4u19uf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/luncdao')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AdapterHub/roberta-base-pf-wnut_17 | [
"roberta",
"en",
"dataset:wnut_17",
"arxiv:2104.08247",
"adapter-transformers",
"token-classification"
]
| token-classification | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.85 +/- 22.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (repo id and filename below are placeholders):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Fetch the zipped checkpoint from the Hub and restore the PPO policy.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Aidan8756/stephenKingModel | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: sdcid
---
### 8a4b41d6-d132-4a09-8d3d-9b0d7fb19618 Dreambooth model trained by tzvc with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
sdcid (use that in your prompt)

|
AidenGO/KDXF_Bert4MaskedLM | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### saqib_sarahkhan_t350-u4000-11-21-pm Dreambooth model trained by imjunaidafzal with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
Akaramhuggingface/News | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/blockchainu-dsocialcommons-schwentker/1671735857548/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486515700185251840/pU5Mrrs8_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1420237912776404995/mTTyXl3S_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/541637963944177664/X6E_IJtk_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Robert Schwentker & dSocialCommons & BlockchainUniversity</div>
<div style="text-align: center; font-size: 14px;">@blockchainu-dsocialcommons-schwentker</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Robert Schwentker & dSocialCommons & BlockchainUniversity.
| Data | Robert Schwentker | dSocialCommons | BlockchainUniversity |
| --- | --- | --- | --- |
| Tweets downloaded | 3203 | 690 | 1257 |
| Retweets | 2030 | 248 | 727 |
| Short tweets | 40 | 14 | 11 |
| Tweets kept | 1133 | 428 | 519 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1fiv3x6o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @blockchainu-dsocialcommons-schwentker's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16q2xe59) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16q2xe59/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/blockchainu-dsocialcommons-schwentker')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Akari/albert-base-v2-finetuned-squad | [
"pytorch",
"tensorboard",
"albert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.04 +/- 12.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (repo id and filename below are placeholders for this model's actual files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint and restore the trained policy.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Akash7897/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### sci-fi-landscape- Dreambooth model trained by Joeythemonster with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
Akash7897/test-clm | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta_emo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_emo
This model is a fine-tuned version of [ibm/ColD-Fusion](https://huggingface.co/ibm/ColD-Fusion) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
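As a rough sketch, these settings correspond to the following Hugging Face `TrainingArguments` (the output directory is a placeholder; the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# Sketch of the configuration above; dataset and Trainer wiring omitted.
args = TrainingArguments(
    output_dir="roberta_emo",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
)
```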
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
## Model Recycling
[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=2.24&mnli_lp=nan&20_newsgroup=0.54&ag_news=0.46&amazon_reviews_multi=-0.50&anli=1.81&boolq=2.93&cb=21.52&cola=-0.12&copa=22.30&dbpedia=0.20&esnli=-0.30&financial_phrasebank=0.99&imdb=-0.12&isear=0.54&mnli=-0.16&mrpc=0.37&multirc=2.85&poem_sentiment=4.52&qnli=0.47&qqp=0.24&rotten_tomatoes=2.95&rte=10.99&sst2=1.64&sst_5bins=0.79&stsb=1.59&trec_coarse=0.09&trec_fine=3.44&tweet_ev_emoji=-0.31&tweet_ev_emotion=0.65&tweet_ev_hate=-0.40&tweet_ev_irony=4.08&tweet_ev_offensive=2.08&tweet_ev_sentiment=-0.16&wic=3.02&wnli=-8.31&wsc=0.19&yahoo_answers=-0.14&model_name=gustavecortal%2Froberta_emo&base_name=roberta-base) using gustavecortal/roberta_emo as a base model yields an average score of 78.47, compared to 76.22 for roberta-base.
The model is ranked 2nd among all tested models for the roberta-base architecture as of 18/01/2023.
Results:
| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
| 85.8205 | 90.2333 | 66.08 | 52.1563 | 81.6208 | 89.2857 | 83.4132 | 71 | 77.5 | 90.6963 | 86.1 | 93.776 | 73.0117 | 86.8186 | 88.2353 | 64.0677 | 88.4615 | 92.8794 | 90.9523 | 91.3696 | 83.3935 | 95.7569 | 57.4661 | 91.5106 | 97.2 | 91.2 | 45.994 | 82.4771 | 52.4916 | 75.6378 | 86.6279 | 70.8727 | 68.4953 | 46.4789 | 63.4615 | 72.2667 |
For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
|
Akashamba/distilbert-base-uncased-finetuned-ner | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.90 +/- 15.39
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (repo id and filename below are placeholders for this checkpoint's Hub location):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholders: substitute the actual Hub repo and filename for this model.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Akashpb13/Galician_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"gl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- wer
model-index:
- name: model_broadclass_onSet0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_broadclass_onSet0.1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1129
- 0 Precision: 1.0
- 0 Recall: 1.0
- 0 F1-score: 1.0
- 0 Support: 31
- 1 Precision: 0.9259
- 1 Recall: 1.0
- 1 F1-score: 0.9615
- 1 Support: 25
- 2 Precision: 1.0
- 2 Recall: 0.9259
- 2 F1-score: 0.9615
- 2 Support: 27
- 3 Precision: 1.0
- 3 Recall: 1.0
- 3 F1-score: 1.0
- 3 Support: 15
- Accuracy: 0.9796
- Macro avg Precision: 0.9815
- Macro avg Recall: 0.9815
- Macro avg F1-score: 0.9808
- Macro avg Support: 98
- Weighted avg Precision: 0.9811
- Weighted avg Recall: 0.9796
- Weighted avg F1-score: 0.9796
- Weighted avg Support: 98
- Wer: 0.0859
- Mtrix: [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 2, 25, 0], [3, 0, 0, 0, 15]]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 80
- mixed_precision_training: Native AMP
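A sketch of how these map onto `TrainingArguments` (the output directory is a placeholder; note that 8 per-device samples × 2 accumulation steps give the effective batch of 16 listed above):
```python
from transformers import TrainingArguments

# Sketch only; model, processor, and data collator wiring omitted.
args = TrainingArguments(
    output_dir="model_broadclass_onSet0.1",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    warmup_steps=200,
    num_train_epochs=80,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)
```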
### Training results
| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:---------------------------------------------------------------------------------------:|
| 2.343 | 4.16 | 100 | 2.2083 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 2.2769 | 8.33 | 200 | 2.1649 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.9687 | 12.49 | 300 | 1.8723 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.8046 | 16.65 | 400 | 1.6982 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.5645 | 20.82 | 500 | 1.5862 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.5322 | 24.98 | 600 | 1.5736 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.5468 | 29.16 | 700 | 1.4736 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.0542 | 33.33 | 800 | 1.0068 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 0.9664 | 37.49 | 900 | 0.9831 | 0.3483 | 1.0 | 0.5167 | 31 | 1.0 | 0.12 | 0.2143 | 25 | 1.0 | 0.0370 | 0.0714 | 27 | 0.8 | 0.2667 | 0.4 | 15 | 0.3980 | 0.7871 | 0.3559 | 0.3006 | 98 | 0.7632 | 0.3980 | 0.2990 | 98 | 0.9758 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 21, 3, 0, 1], [2, 26, 0, 1, 0], [3, 11, 0, 0, 4]] |
| 0.9405 | 41.65 | 1000 | 0.9402 | 0.3827 | 1.0 | 0.5536 | 31 | 1.0 | 0.04 | 0.0769 | 25 | 1.0 | 0.4815 | 0.65 | 27 | 1.0 | 0.2 | 0.3333 | 15 | 0.4898 | 0.8457 | 0.4304 | 0.4035 | 98 | 0.8047 | 0.4898 | 0.4248 | 98 | 0.9630 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 24, 1, 0, 0], [2, 14, 0, 13, 0], [3, 12, 0, 0, 3]] |
| 0.9341 | 45.82 | 1100 | 0.9330 | 0.5082 | 1.0 | 0.6739 | 31 | 0.9231 | 0.48 | 0.6316 | 25 | 1.0 | 0.6296 | 0.7727 | 27 | 0.8571 | 0.4 | 0.5455 | 15 | 0.6735 | 0.8221 | 0.6274 | 0.6559 | 98 | 0.8029 | 0.6735 | 0.6707 | 98 | 0.9497 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 12, 12, 0, 1], [2, 9, 1, 17, 0], [3, 9, 0, 0, 6]] |
| 0.8769 | 49.98 | 1200 | 0.8662 | 0.6327 | 1.0 | 0.775 | 31 | 0.9565 | 0.88 | 0.9167 | 25 | 1.0 | 0.6296 | 0.7727 | 27 | 0.8889 | 0.5333 | 0.6667 | 15 | 0.7959 | 0.8695 | 0.7607 | 0.7828 | 98 | 0.8557 | 0.7959 | 0.7939 | 98 | 0.9442 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 2, 22, 0, 1], [2, 9, 1, 17, 0], [3, 7, 0, 0, 8]] |
| 0.8122 | 54.16 | 1300 | 0.7951 | 0.9062 | 0.9355 | 0.9206 | 31 | 0.8519 | 0.92 | 0.8846 | 25 | 1.0 | 0.8519 | 0.92 | 27 | 0.9375 | 1.0 | 0.9677 | 15 | 0.9184 | 0.9239 | 0.9268 | 0.9232 | 98 | 0.9230 | 0.9184 | 0.9185 | 98 | 0.9348 | [[0, 1, 2, 3], [0, 29, 2, 0, 0], [1, 1, 23, 0, 1], [2, 2, 2, 23, 0], [3, 0, 0, 0, 15]] |
| 0.5747 | 58.33 | 1400 | 0.4843 | 1.0 | 1.0 | 1.0 | 31 | 0.96 | 0.96 | 0.96 | 25 | 1.0 | 0.9630 | 0.9811 | 27 | 0.9375 | 1.0 | 0.9677 | 15 | 0.9796 | 0.9744 | 0.9807 | 0.9772 | 98 | 0.9802 | 0.9796 | 0.9797 | 98 | 0.6732 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 0, 1], [2, 0, 1, 26, 0], [3, 0, 0, 0, 15]] |
| 0.2794 | 62.49 | 1500 | 0.2062 | 1.0 | 1.0 | 1.0 | 31 | 0.96 | 0.96 | 0.96 | 25 | 1.0 | 0.9630 | 0.9811 | 27 | 0.9375 | 1.0 | 0.9677 | 15 | 0.9796 | 0.9744 | 0.9807 | 0.9772 | 98 | 0.9802 | 0.9796 | 0.9797 | 98 | 0.2236 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 0, 1], [2, 0, 1, 26, 0], [3, 0, 0, 0, 15]] |
| 0.1654 | 66.65 | 1600 | 0.1573 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9259 | 1.0 | 0.9615 | 25 | 1.0 | 0.9630 | 0.9811 | 27 | 1.0 | 1.0 | 1.0 | 15 | 0.9796 | 0.9815 | 0.9827 | 0.9816 | 98 | 0.9811 | 0.9796 | 0.9798 | 98 | 0.1303 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 25, 0, 0], [2, 0, 1, 26, 0], [3, 0, 0, 0, 15]] |
| 0.1092 | 70.82 | 1700 | 0.1451 | 1.0 | 0.9677 | 0.9836 | 31 | 0.8889 | 0.96 | 0.9231 | 25 | 1.0 | 0.9259 | 0.9615 | 27 | 0.9375 | 1.0 | 0.9677 | 15 | 0.9592 | 0.9566 | 0.9634 | 0.9590 | 98 | 0.9621 | 0.9592 | 0.9597 | 98 | 0.1056 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 24, 0, 1], [2, 0, 2, 25, 0], [3, 0, 0, 0, 15]] |
| 0.085 | 74.98 | 1800 | 0.1126 | 1.0 | 1.0 | 1.0 | 31 | 0.9259 | 1.0 | 0.9615 | 25 | 1.0 | 0.9259 | 0.9615 | 27 | 1.0 | 1.0 | 1.0 | 15 | 0.9796 | 0.9815 | 0.9815 | 0.9808 | 98 | 0.9811 | 0.9796 | 0.9796 | 98 | 0.0938 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 2, 25, 0], [3, 0, 0, 0, 15]] |
| 0.0824 | 79.16 | 1900 | 0.1118 | 1.0 | 1.0 | 1.0 | 31 | 0.9259 | 1.0 | 0.9615 | 25 | 1.0 | 0.9259 | 0.9615 | 27 | 1.0 | 1.0 | 1.0 | 15 | 0.9796 | 0.9815 | 0.9815 | 0.9808 | 98 | 0.9811 | 0.9796 | 0.9796 | 98 | 0.0859 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 2, 25, 0], [3, 0, 0, 0, 15]] |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Akashpb13/Hausa_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ha",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index",
"has_space"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 488.00 +/- 121.82
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga J4F4N4F -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga J4F4N4F -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga J4F4N4F
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
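For orientation, the same settings map onto the SB3 `DQN` constructor roughly as follows (a sketch: the RL Zoo builds the wrapped Atari env through its own factory, approximated here with `make_atari_env` plus frame stacking):
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Atari preprocessing plus a 4-frame stack, mirroring the env_wrapper/frame_stack entries above.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)
model = DQN(
    "CnnPolicy",
    env,
    batch_size=32,
    buffer_size=100_000,
    exploration_final_eps=0.01,
    exploration_fraction=0.1,
    gradient_steps=1,
    learning_rate=1e-4,
    learning_starts=100_000,
    target_update_interval=1000,
    train_freq=4,
)
model.learn(total_timesteps=500_000)
```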
|
Akashpb13/xlsr_hungarian_new | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hu",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: leidirocha
---
### leidirocha Dreambooth model trained by Babivill with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
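A minimal `diffusers` sketch outside Colab (the repo id is an assumption about where this model was pushed):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed repo id; the concept token "leidirocha" comes from the card below.
pipe = StableDiffusionPipeline.from_pretrained(
    "Babivill/leidirocha", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait photo of leidirocha", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("leidirocha.png")
```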
Sample pictures of:
leidirocha (use that in your prompt)

|
AkshaySg/langid | [
"multilingual",
"dataset:VoxLingua107",
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"license:apache-2.0"
]
| audio-classification | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 622.00 +/- 199.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kRo0T -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kRo0T -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kRo0T
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
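The checkpoint can also be loaded directly in Python rather than through the Zoo CLI (a sketch; the filename follows the RL Zoo naming convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Repo org taken from the commands above; the repo/filename follow the Zoo's usual convention.
checkpoint = load_from_hub(
    repo_id="kRo0T/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```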
|
AlanDev/DallEMiniButBetter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Copilot_for_poors
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Copilot_for_poors
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8084
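Since the base model is T5, inference follows the usual seq2seq pattern (a sketch; the Hub repo id and the code-completion prompt format are assumptions):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Placeholder repo id; substitute wherever this checkpoint was pushed.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("<user>/Copilot_for_poors")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```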
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 2.9425 |
| No log | 2.0 | 130 | 2.5889 |
| No log | 3.0 | 195 | 2.3886 |
| No log | 4.0 | 260 | 2.2679 |
| No log | 5.0 | 325 | 2.1732 |
| No log | 6.0 | 390 | 2.0942 |
| No log | 7.0 | 455 | 2.0343 |
| 2.7389 | 8.0 | 520 | 1.9956 |
| 2.7389 | 9.0 | 585 | 1.9557 |
| 2.7389 | 10.0 | 650 | 1.9284 |
| 2.7389 | 11.0 | 715 | 1.9024 |
| 2.7389 | 12.0 | 780 | 1.8811 |
| 2.7389 | 13.0 | 845 | 1.8612 |
| 2.7389 | 14.0 | 910 | 1.8443 |
| 2.7389 | 15.0 | 975 | 1.8331 |
| 2.1064 | 16.0 | 1040 | 1.8228 |
| 2.1064 | 17.0 | 1105 | 1.8178 |
| 2.1064 | 18.0 | 1170 | 1.8130 |
| 2.1064 | 19.0 | 1235 | 1.8093 |
| 2.1064 | 20.0 | 1300 | 1.8084 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|