Dataset schema:

| column | dtype | stats |
|:--|:--|:--|
| pipeline_tag | string | 48 classes |
| library_name | string | 205 classes |
| text | string | 0 to 18.3M chars |
| metadata | string | 2 to 1.07B chars |
| id | string | 5 to 122 chars |
| last_modified | null | always null |
| tags | list | 1 to 1.84k items |
| sha | null | always null |
| created_at | string | 25 chars |

---

**pipeline_tag:** text-generation · **library_name:** transformers

**text:**
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
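
Pending the authors' own snippet, a minimal sketch assuming the standard `transformers` text-generation pipeline; the repository id and the `trust_remote_code` requirement are taken from this row's tags, not from the card:

```python
# A minimal sketch, not from the card: it assumes the standard transformers
# text-generation pipeline works for this repository. The model id and the
# need for trust_remote_code are inferred from the row's "custom_code" tag.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TitanML/tiny-jamba",
    trust_remote_code=True,  # the repository ships custom Jamba model code
)
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```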
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

**metadata:** `{"library_name": "transformers", "tags": []}`

**id:** TitanML/tiny-jamba · **last_modified:** null

**tags:** `["transformers", "safetensors", "jamba", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"]` · **sha:** null

**created_at:** 2024-04-23T21:30:48+00:00

---

**pipeline_tag:** null · **library_name:** null

**text:** (empty)

**metadata:** `{}`

**id:** ke-lly/45509326_1 · **last_modified:** null

**tags:** `["region:us"]` · **sha:** null

**created_at:** 2024-04-23T21:31:53+00:00

---

**pipeline_tag:** null · **library_name:** peft

**text:**
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-1.1b-chat-dpo-qlora
This model is a fine-tuned version of [martimfasantos/tinyllama-1.1b-chat-sft-qlora](https://huggingface.co/martimfasantos/tinyllama-1.1b-chat-sft-qlora) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6084
- Rewards/chosen: -1.0875
- Rewards/rejected: -1.3916
- Rewards/accuracies: 0.6580
- Rewards/margins: 0.3041
- Logps/rejected: -490.8393
- Logps/chosen: -504.9714
- Logits/rejected: -2.6096
- Logits/chosen: -2.6425
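
For orientation, the DPO margin is simply the chosen reward minus the rejected reward; a quick check that the summary above is self-consistent:

```python
# Editorial sanity check on the summary metrics above:
# Rewards/margins = Rewards/chosen - Rewards/rejected.
rewards_chosen = -1.0875
rewards_rejected = -1.3916
print(round(rewards_chosen - rewards_rejected, 4))  # 0.3041, as reported
```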
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6921 | 0.03 | 100 | 0.6923 | 0.0160 | 0.0142 | 0.5645 | 0.0018 | -350.2683 | -394.6286 | -2.7841 | -2.8363 |
| 0.6894 | 0.05 | 200 | 0.6894 | 0.0433 | 0.0353 | 0.5920 | 0.0080 | -348.1495 | -391.8949 | -2.7811 | -2.8333 |
| 0.6815 | 0.08 | 300 | 0.6844 | 0.0806 | 0.0609 | 0.6025 | 0.0196 | -345.5898 | -388.1692 | -2.7838 | -2.8349 |
| 0.6869 | 0.1 | 400 | 0.6788 | 0.0607 | 0.0269 | 0.6125 | 0.0339 | -348.9979 | -390.1522 | -2.7931 | -2.8423 |
| 0.6744 | 0.13 | 500 | 0.6724 | 0.0243 | -0.0249 | 0.6210 | 0.0492 | -354.1764 | -393.7983 | -2.7889 | -2.8371 |
| 0.6679 | 0.16 | 600 | 0.6625 | -0.0566 | -0.1346 | 0.6265 | 0.0780 | -365.1402 | -401.8826 | -2.7709 | -2.8179 |
| 0.637 | 0.18 | 700 | 0.6555 | -0.2568 | -0.3654 | 0.6290 | 0.1086 | -388.2211 | -421.9038 | -2.7596 | -2.8051 |
| 0.6166 | 0.21 | 800 | 0.6488 | -0.3935 | -0.5223 | 0.6320 | 0.1288 | -403.9116 | -435.5756 | -2.7523 | -2.7961 |
| 0.6335 | 0.24 | 900 | 0.6458 | -0.4516 | -0.6042 | 0.6380 | 0.1527 | -412.1083 | -441.3798 | -2.7325 | -2.7764 |
| 0.6286 | 0.26 | 1000 | 0.6406 | -0.8692 | -1.0442 | 0.625 | 0.1750 | -456.1026 | -483.1429 | -2.7123 | -2.7531 |
| 0.669 | 0.29 | 1100 | 0.6406 | -0.3445 | -0.4984 | 0.6365 | 0.1538 | -401.5222 | -430.6789 | -2.6946 | -2.7354 |
| 0.6723 | 0.31 | 1200 | 0.6358 | -0.4619 | -0.6430 | 0.6425 | 0.1811 | -415.9841 | -442.4163 | -2.6701 | -2.7077 |
| 0.605 | 0.34 | 1300 | 0.6297 | -0.6894 | -0.8903 | 0.6435 | 0.2009 | -440.7144 | -465.1627 | -2.6764 | -2.7122 |
| 0.6361 | 0.37 | 1400 | 0.6267 | -0.7144 | -0.9307 | 0.6505 | 0.2163 | -444.7496 | -467.6648 | -2.6711 | -2.7091 |
| 0.6085 | 0.39 | 1500 | 0.6213 | -1.0532 | -1.3084 | 0.6490 | 0.2552 | -482.5256 | -501.5469 | -2.6435 | -2.6797 |
| 0.6317 | 0.42 | 1600 | 0.6197 | -1.1246 | -1.3825 | 0.6490 | 0.2579 | -489.9323 | -508.6858 | -2.6172 | -2.6506 |
| 0.6702 | 0.44 | 1700 | 0.6182 | -1.0036 | -1.2644 | 0.6530 | 0.2609 | -478.1268 | -496.5815 | -2.6407 | -2.6762 |
| 0.5658 | 0.47 | 1800 | 0.6219 | -1.3479 | -1.6348 | 0.6445 | 0.2869 | -515.1606 | -531.0145 | -2.5866 | -2.6182 |
| 0.6039 | 0.5 | 1900 | 0.6154 | -0.9014 | -1.1716 | 0.6630 | 0.2702 | -468.8458 | -486.3656 | -2.6376 | -2.6742 |
| 0.6173 | 0.52 | 2000 | 0.6121 | -1.1535 | -1.4470 | 0.6575 | 0.2934 | -496.3810 | -511.5793 | -2.6232 | -2.6580 |
| 0.62 | 0.55 | 2100 | 0.6116 | -1.1600 | -1.4523 | 0.6650 | 0.2923 | -496.9117 | -512.2247 | -2.6278 | -2.6629 |
| 0.5957 | 0.58 | 2200 | 0.6132 | -0.9592 | -1.2431 | 0.6655 | 0.2839 | -475.9958 | -492.1489 | -2.6317 | -2.6674 |
| 0.6093 | 0.6 | 2300 | 0.6138 | -1.0935 | -1.3811 | 0.6625 | 0.2876 | -489.7906 | -505.5738 | -2.6283 | -2.6619 |
| 0.6009 | 0.63 | 2400 | 0.6108 | -1.0519 | -1.3479 | 0.6610 | 0.2959 | -486.4695 | -501.4175 | -2.6088 | -2.6432 |
| 0.5988 | 0.65 | 2500 | 0.6108 | -1.0427 | -1.3419 | 0.6590 | 0.2992 | -485.8730 | -500.4982 | -2.6143 | -2.6477 |
| 0.606 | 0.68 | 2600 | 0.6112 | -1.0188 | -1.3192 | 0.6545 | 0.3003 | -483.6013 | -498.1078 | -2.5974 | -2.6304 |
| 0.6118 | 0.71 | 2700 | 0.6106 | -1.0808 | -1.3857 | 0.6595 | 0.3049 | -490.2562 | -504.3045 | -2.5945 | -2.6274 |
| 0.6134 | 0.73 | 2800 | 0.6096 | -1.1549 | -1.4635 | 0.6585 | 0.3086 | -498.0366 | -511.7179 | -2.5978 | -2.6303 |
| 0.6159 | 0.76 | 2900 | 0.6097 | -1.0550 | -1.3509 | 0.6585 | 0.2959 | -486.7739 | -501.7256 | -2.6175 | -2.6500 |
| 0.5815 | 0.79 | 3000 | 0.6091 | -1.1025 | -1.4048 | 0.6570 | 0.3023 | -492.1650 | -506.4727 | -2.6089 | -2.6420 |
| 0.5885 | 0.81 | 3100 | 0.6089 | -1.0977 | -1.4006 | 0.6595 | 0.3029 | -491.7444 | -505.9960 | -2.6001 | -2.6337 |
| 0.6074 | 0.84 | 3200 | 0.6086 | -1.0982 | -1.4029 | 0.6605 | 0.3047 | -491.9724 | -506.0455 | -2.6056 | -2.6388 |
| 0.5981 | 0.86 | 3300 | 0.6087 | -1.0853 | -1.3881 | 0.6610 | 0.3028 | -490.4915 | -504.7571 | -2.6117 | -2.6442 |
| 0.5944 | 0.89 | 3400 | 0.6087 | -1.0897 | -1.3931 | 0.6580 | 0.3034 | -490.9887 | -505.1947 | -2.6026 | -2.6360 |
| 0.5979 | 0.92 | 3500 | 0.6085 | -1.0922 | -1.3962 | 0.6595 | 0.3040 | -491.3070 | -505.4438 | -2.6136 | -2.6460 |
| 0.6154 | 0.94 | 3600 | 0.6086 | -1.0905 | -1.3946 | 0.6595 | 0.3040 | -491.1413 | -505.2781 | -2.6066 | -2.6397 |
| 0.6053 | 0.97 | 3700 | 0.6086 | -1.0907 | -1.3946 | 0.6550 | 0.3039 | -491.1405 | -505.2943 | -2.6094 | -2.6423 |
| 0.602 | 0.99 | 3800 | 0.6085 | -1.0876 | -1.3914 | 0.6580 | 0.3038 | -490.8211 | -504.9807 | -2.6096 | -2.6425 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
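
The card's usage sections are empty; a minimal inference sketch, assuming the adapter loads with the standard PEFT API (the repository id comes from the card title):

```python
# A minimal inference sketch, assuming this adapter loads with the standard
# PEFT API; only the repository id is taken from the card, the rest is
# generic usage the card itself does not document.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_id = "martimfasantos/tinyllama-1.1b-chat-dpo-qlora"
model = AutoPeftModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is DPO training?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```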

**metadata:** `{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "model-index": [{"name": "tinyllama-1.1b-chat-dpo-qlora", "results": []}]}`

**id:** martimfasantos/tinyllama-1.1b-chat-dpo-qlora · **last_modified:** null

**tags:** `["peft", "tensorboard", "safetensors", "llama", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "4-bit", "region:us"]` · **sha:** null

**created_at:** 2024-04-23T21:32:30+00:00

---

**pipeline_tag:** null · **library_name:** null

**text:** (empty)

**metadata:** `{}`

**id:** minindu-liya99/a2c-PandaReachDense-v3 · **last_modified:** null

**tags:** `["region:us"]` · **sha:** null

**created_at:** 2024-04-23T21:36:04+00:00

---

**pipeline_tag:** text-generation · **library_name:** transformers

**text:**
# GALAXY-16B-v1.0

## Technical notes
- 72 layers, DUS procedure: Mistral (32) -> SOLAR (48) -> GALAXY (72)
- 16B parameters
- created as an extension of the depth up-scaling (DUS) procedure Upstage used for SOLAR (sketched below)
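
A schematic of the layer arithmetic, following the DUS recipe from the SOLAR paper; the trim widths are assumptions, since the card only states the layer counts:

```python
# Illustrative depth up-scaling (DUS): stack two copies of an n-layer model,
# trimming layers at the seam. The trim widths below are assumptions for
# GALAXY; the card only gives the layer counts 32 -> 48 -> 72.
def depth_upscale(layers, trim):
    top = layers[: len(layers) - trim]  # copy A minus its last `trim` layers
    bottom = layers[trim:]              # copy B minus its first `trim` layers
    return top + bottom

mistral = list(range(32))
solar = depth_upscale(mistral, trim=8)   # 24 + 24 = 48 layers, as in SOLAR
galaxy = depth_upscale(solar, trim=12)   # 36 + 36 = 72 layers (assumed trim)
print(len(solar), len(galaxy))           # 48 72
```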
## Results
- the model can and will produce NSFW content
- eval results are pending

**metadata:** `{"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]}`

**id:** TeeZee/GALAXY-16B-v1.0-bpw8.0-h8-exl2 · **last_modified:** null

**tags:** `["transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "en", "dataset:Intel/orca_dpo_pairs", "dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "dataset:Open-Orca/SlimOrca", "dataset:MinervaAI/Aesir-Preview", "dataset:allenai/ultrafeedback_binarized_cleaned", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us"]` · **sha:** null

**created_at:** 2024-04-23T21:36:10+00:00

---

**pipeline_tag:** null · **library_name:** peft

**text:**
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.5457
- eval_runtime: 2.9825
- eval_samples_per_second: 67.059
- eval_steps_per_second: 8.382
- epoch: 1.9968
- step: 156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
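
A quick consistency check on these numbers; the ~5000-example training set is an assumption read off the `_esnli_5000_` model name, as the card lists the dataset as unknown:

```python
# Editorial consistency check on the hyperparameters and eval summary above.
# The 5000-example training set is an assumption taken from "_esnli_5000_"
# in the model name; the card itself calls the dataset unknown.
train_batch_size, grad_accum = 2, 32
effective_batch = train_batch_size * grad_accum
print(effective_batch)                    # 64 == total_train_batch_size

steps_per_epoch = 5000 / effective_batch  # 78.125 optimizer steps per epoch
print(round(156 / steps_per_epoch, 4))    # 1.9968 == reported epoch at step 156
```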
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1

**metadata:** `{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep", "results": []}]}`

**id:** mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep · **last_modified:** null

**tags:** `["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us"]` · **sha:** null

**created_at:** 2024-04-23T21:37:00+00:00

---

**pipeline_tag:** null · **library_name:** peft

**text:**
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.2242
- eval_runtime: 2.8668
- eval_samples_per_second: 69.763
- eval_steps_per_second: 8.72
- epoch: 2.9952
- step: 234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1

**metadata:** `{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep", "results": []}]}`

**id:** mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep · **last_modified:** null

**tags:** `["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us"]` · **sha:** null

**created_at:** 2024-04-23T21:37:16+00:00

---

**pipeline_tag:** token-classification · **library_name:** transformers

**text:**
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v2-WtP-FT-12L-256BS-UD-Opus-cUD-cOpus
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1157
- Precision: 0.6058
- Recall: 0.73
- F1: 0.6621
- Threshold: 0.4
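
A quick check that the summary is self-consistent (F1 is the harmonic mean of precision and recall):

```python
# Editorial check: F1 = 2PR / (P + R) for the summary numbers above.
precision, recall = 0.6058, 0.73
print(round(2 * precision * recall / (precision + recall), 4))  # 0.6621
```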
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
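
The results below report a per-evaluation-set threshold, presumably the probability cutoff applied to the model's per-token boundary scores; a hypothetical sketch of such thresholding, since the card does not document inference:

```python
# Hypothetical illustration of applying a decision threshold to per-token
# boundary probabilities; the model's actual inference code is not documented
# in this card.
import torch

logits = torch.tensor([-3.0, 0.2, 2.5, -1.0])  # made-up per-token logits
boundaries = torch.sigmoid(logits) > 0.4       # 0.4 = the summary threshold above
print(boundaries.tolist())                     # [False, True, True, False]
```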
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Threshold |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:---------:|
| No log | 0.59 | 250 | 0.0430 | 0.9104 | 0.915 | 0.9127 | 0.4 |
| No log | 0.59 | 250 | 0.0173 | 0.8413 | 0.875 | 0.8578 | 0.4 |
| No log | 0.59 | 250 | 0.0374 | 0.8814 | 0.855 | 0.8680 | 0.5 |
| No log | 0.59 | 250 | 0.0191 | 0.8539 | 0.935 | 0.8926 | 0.2 |
| No log | 0.59 | 250 | 0.0298 | 0.9391 | 0.925 | 0.9320 | 0.6 |
| No log | 0.59 | 250 | 0.0104 | 0.9755 | 0.995 | 0.9851 | 0.8 |
| No log | 0.59 | 250 | 0.0161 | 0.9391 | 0.9296 | 0.9343 | 0.6 |
| No log | 0.59 | 250 | 0.0104 | 0.9706 | 0.99 | 0.9802 | 0.7000 |
| No log | 0.59 | 250 | 0.0162 | 0.9387 | 0.995 | 0.9660 | 0.7000 |
| No log | 0.59 | 250 | 0.0376 | 0.9091 | 0.9 | 0.9045 | 0.5 |
| No log | 0.59 | 250 | 0.0119 | 0.9522 | 0.995 | 0.9731 | 0.6 |
| No log | 0.59 | 250 | 0.0178 | 0.9234 | 0.965 | 0.9438 | 0.8 |
| No log | 0.59 | 250 | 0.0089 | 0.9479 | 1.0 | 0.9732 | 0.3000 |
| No log | 0.59 | 250 | 0.0239 | 0.9299 | 0.995 | 0.9614 | 0.7000 |
| No log | 0.59 | 250 | 0.0165 | 0.9431 | 0.995 | 0.9684 | 0.5 |
| No log | 0.59 | 250 | 0.0118 | 0.9423 | 0.98 | 0.9608 | 0.6 |
| No log | 0.59 | 250 | 0.0166 | 0.95 | 0.9645 | 0.9572 | 0.9 |
| No log | 0.59 | 250 | 0.0153 | 0.9245 | 0.98 | 0.9515 | 0.6 |
| No log | 0.59 | 250 | 0.0529 | 0.9101 | 0.8141 | 0.8594 | 0.7000 |
| No log | 0.59 | 250 | 0.0183 | 0.9299 | 0.995 | 0.9614 | 0.7000 |
| No log | 0.59 | 250 | 0.0124 | 0.9249 | 0.985 | 0.9540 | 0.4 |
| No log | 0.59 | 250 | 0.0415 | 0.9505 | 0.96 | 0.9552 | 0.3000 |
| No log | 0.59 | 250 | 0.0060 | 0.9793 | 0.945 | 0.9618 | 0.7000 |
| No log | 0.59 | 250 | 0.0097 | 0.9552 | 0.9746 | 0.9648 | 0.4 |
| No log | 0.59 | 250 | 0.0221 | 0.9423 | 0.98 | 0.9608 | 0.6 |
| No log | 0.59 | 250 | 0.0602 | 0.8537 | 0.875 | 0.8642 | 0.4 |
| No log | 0.59 | 250 | 0.0082 | 0.9122 | 0.9397 | 0.9257 | 0.5 |
| No log | 0.59 | 250 | 0.0245 | 0.8884 | 0.995 | 0.9387 | 0.3000 |
| No log | 0.59 | 250 | 0.0221 | 0.9128 | 0.89 | 0.9013 | 0.6 |
| No log | 0.59 | 250 | 0.0159 | 0.9476 | 0.995 | 0.9707 | 0.4 |
| No log | 0.59 | 250 | 0.0345 | 0.8995 | 0.985 | 0.9403 | 0.064 |
| No log | 0.59 | 250 | 0.0259 | 0.9387 | 0.995 | 0.9660 | 0.6 |
| No log | 0.59 | 250 | 0.0154 | 0.9588 | 0.93 | 0.9442 | 0.5 |
| No log | 0.59 | 250 | 0.0115 | 0.9709 | 1.0 | 0.9852 | 0.5 |
| No log | 0.59 | 250 | 0.0104 | 0.975 | 0.975 | 0.975 | 0.7000 |
| No log | 0.59 | 250 | 0.0812 | 0.9123 | 0.78 | 0.8410 | 0.5 |
| No log | 0.59 | 250 | 0.0137 | 0.9375 | 0.975 | 0.9559 | 0.7000 |
| No log | 0.59 | 250 | 0.0257 | 0.9610 | 0.985 | 0.9728 | 0.062 |
| No log | 0.59 | 250 | 0.0739 | 0.8167 | 0.7387 | 0.7757 | 0.2 |
| No log | 0.59 | 250 | 0.0484 | 0.9275 | 0.8995 | 0.9133 | 0.3000 |
| No log | 0.59 | 250 | 0.0569 | 0.8267 | 0.93 | 0.8753 | 0.5 |
| No log | 0.59 | 250 | 0.0152 | 0.9265 | 0.945 | 0.9356 | 0.2 |
| No log | 0.59 | 250 | 0.0146 | 0.9801 | 0.985 | 0.9825 | 0.3000 |
| No log | 0.59 | 250 | 0.0058 | 0.9604 | 0.9749 | 0.9676 | 0.4 |
| No log | 0.59 | 250 | 0.0092 | 0.9686 | 0.925 | 0.9463 | 0.9 |
| No log | 0.59 | 250 | 0.0055 | 0.9747 | 0.965 | 0.9698 | 0.9 |
| No log | 0.59 | 250 | 0.0111 | 0.9524 | 1.0 | 0.9756 | 0.6 |
| No log | 0.59 | 250 | 0.0345 | 0.8884 | 0.955 | 0.9205 | 0.5 |
| No log | 0.59 | 250 | 0.0179 | 0.9852 | 1.0 | 0.9926 | 0.2 |
| No log | 0.59 | 250 | 0.0214 | 0.9517 | 0.985 | 0.9681 | 0.3000 |
| No log | 0.59 | 250 | 0.0188 | 0.9612 | 0.99 | 0.9754 | 0.8 |
| No log | 0.59 | 250 | 0.0075 | 0.9365 | 0.8985 | 0.9171 | 0.9 |
| No log | 0.59 | 250 | 0.0661 | 0.8122 | 0.8 | 0.8060 | 0.2 |
| No log | 0.59 | 250 | 0.0637 | 0.8495 | 0.875 | 0.8621 | 0.3000 |
| No log | 0.59 | 250 | 0.0137 | 0.9657 | 0.985 | 0.9752 | 0.9 |
| No log | 0.59 | 250 | 0.0154 | 0.9524 | 1.0 | 0.9756 | 0.3000 |
| No log | 0.59 | 250 | 0.1067 | 0.7964 | 0.88 | 0.8361 | 0.2 |
| No log | 0.59 | 250 | 0.0097 | 0.9522 | 0.995 | 0.9731 | 0.5 |
| No log | 0.59 | 250 | 0.1296 | 0.8382 | 0.855 | 0.8465 | 0.4 |
| No log | 0.59 | 250 | 0.0123 | 0.9524 | 1.0 | 0.9756 | 0.7000 |
| No log | 0.59 | 250 | 0.0092 | 0.9707 | 0.995 | 0.9827 | 0.4 |
| No log | 0.59 | 250 | 0.0073 | 0.9372 | 0.97 | 0.9533 | 0.7000 |
| No log | 0.59 | 250 | 0.0497 | 0.9055 | 0.91 | 0.9077 | 0.5 |
| No log | 0.59 | 250 | 0.0071 | 0.9706 | 0.99 | 0.9802 | 0.7000 |
| No log | 0.59 | 250 | 0.0119 | 0.9706 | 0.99 | 0.9802 | 0.9 |
| No log | 0.59 | 250 | 0.0136 | 0.9463 | 0.97 | 0.9580 | 0.9 |
| No log | 0.59 | 250 | 0.0165 | 0.9567 | 0.995 | 0.9755 | 0.2 |
| No log | 0.59 | 250 | 0.0083 | 0.9615 | 1.0 | 0.9804 | 0.6 |
| No log | 0.59 | 250 | 0.0331 | 0.9135 | 0.845 | 0.8779 | 0.4 |
| No log | 0.59 | 250 | 0.0670 | 0.8756 | 0.845 | 0.8601 | 0.4 |
| No log | 0.59 | 250 | 0.0113 | 0.9108 | 0.97 | 0.9395 | 0.3000 |
| No log | 0.59 | 250 | 0.0684 | 0.8018 | 0.87 | 0.8345 | 0.6 |
| No log | 0.59 | 250 | 0.0122 | 0.9476 | 0.995 | 0.9707 | 0.2 |
| No log | 0.59 | 250 | 0.0186 | 0.9245 | 0.98 | 0.9515 | 0.6 |
| No log | 0.59 | 250 | 0.0204 | 0.8585 | 0.88 | 0.8691 | 0.6 |
| No log | 0.59 | 250 | 0.0088 | 0.9479 | 0.91 | 0.9286 | 0.5 |
| No log | 0.59 | 250 | 0.0176 | 0.9346 | 1.0 | 0.9662 | 0.2 |
| No log | 0.59 | 250 | 0.0157 | 0.9529 | 0.91 | 0.9309 | 0.6 |
| No log | 0.59 | 250 | 0.0550 | 0.8720 | 0.92 | 0.8954 | 0.2 |
| No log | 0.59 | 250 | 0.0230 | 0.875 | 0.91 | 0.8922 | 0.4 |
| No log | 0.59 | 250 | 0.0322 | 0.8670 | 0.8889 | 0.8778 | 0.2 |
| No log | 0.59 | 250 | 0.0325 | 0.9630 | 0.91 | 0.9357 | 0.6 |
| No log | 0.59 | 250 | 0.1328 | 0.7940 | 0.79 | 0.7920 | 0.4 |
| No log | 0.59 | 250 | 0.0253 | 0.8267 | 0.835 | 0.8308 | 0.5 |
| No log | 0.59 | 250 | 0.0647 | 0.6867 | 0.855 | 0.7617 | 0.3000 |
| No log | 0.59 | 250 | 0.0258 | 0.7906 | 0.925 | 0.8525 | 0.3000 |
| No log | 0.59 | 250 | 0.0857 | 0.8333 | 0.8 | 0.8163 | 0.4 |
| No log | 0.59 | 250 | 0.0938 | 0.732 | 0.915 | 0.8133 | 0.3000 |
| No log | 0.59 | 250 | 0.0724 | 0.5541 | 0.4372 | 0.4888 | 0.4 |
| No log | 0.59 | 250 | 0.0525 | 0.7787 | 0.915 | 0.8414 | 0.3000 |
| No log | 0.59 | 250 | 0.0538 | 0.86 | 0.86 | 0.8600 | 0.6 |
| No log | 0.59 | 250 | 0.1075 | 0.7843 | 0.8 | 0.7921 | 0.4 |
| No log | 0.59 | 250 | 0.0536 | 0.7879 | 0.91 | 0.8445 | 0.4 |
| No log | 0.59 | 250 | 0.0341 | 0.8216 | 0.875 | 0.8475 | 0.5 |
| No log | 0.59 | 250 | 0.0674 | 0.7762 | 0.815 | 0.7951 | 0.5 |
| No log | 0.59 | 250 | 0.0671 | 0.9021 | 0.875 | 0.8883 | 0.7000 |
| No log | 0.59 | 250 | 0.0626 | 0.8969 | 0.87 | 0.8832 | 0.7000 |
| No log | 0.59 | 250 | 0.0498 | 0.8307 | 0.785 | 0.8072 | 0.6 |
| No log | 0.59 | 250 | 0.0419 | 0.7860 | 0.8492 | 0.8164 | 0.5 |
| No log | 0.59 | 250 | 0.0615 | 0.7732 | 0.75 | 0.7614 | 0.5 |
| No log | 0.59 | 250 | 0.0806 | 0.7124 | 0.83 | 0.7667 | 0.5 |
| No log | 0.59 | 250 | 0.0570 | 0.8381 | 0.88 | 0.8585 | 0.5 |
| No log | 0.59 | 250 | 0.0404 | 0.8602 | 0.8 | 0.8290 | 0.6 |
| No log | 0.59 | 250 | 0.1475 | 0.7015 | 0.94 | 0.8034 | 0.062 |
| No log | 0.59 | 250 | 0.0237 | 0.8466 | 0.8 | 0.8226 | 0.5 |
| No log | 0.59 | 250 | 0.0517 | 0.8020 | 0.8223 | 0.8120 | 0.4 |
| No log | 0.59 | 250 | 0.0732 | 0.8224 | 0.88 | 0.8502 | 0.5 |
| No log | 0.59 | 250 | 0.1005 | 0.6875 | 0.6633 | 0.6752 | 0.3000 |
| No log | 0.59 | 250 | 0.0285 | 0.7427 | 0.765 | 0.7537 | 0.4 |
| No log | 0.59 | 250 | 0.0934 | 0.6889 | 0.93 | 0.7915 | 0.4 |
| No log | 0.59 | 250 | 0.0430 | 0.7968 | 0.745 | 0.7700 | 0.5 |
| No log | 0.59 | 250 | 0.0675 | 0.805 | 0.805 | 0.805 | 0.5 |
| No log | 0.59 | 250 | 0.0738 | 0.9056 | 0.815 | 0.8579 | 0.6 |
| No log | 0.59 | 250 | 0.1196 | 0.7336 | 0.84 | 0.7832 | 0.5 |
| No log | 0.59 | 250 | 0.0812 | 0.6231 | 0.835 | 0.7137 | 0.2 |
| No log | 0.59 | 250 | 0.0760 | 0.7662 | 0.77 | 0.7681 | 0.5 |
| No log | 0.59 | 250 | 0.0524 | 0.7792 | 0.9045 | 0.8372 | 0.4 |
| No log | 0.59 | 250 | 0.1207 | 0.7711 | 0.775 | 0.7731 | 0.4 |
| No log | 0.59 | 250 | 0.0881 | 0.3414 | 0.565 | 0.4256 | 0.3000 |
| No log | 0.59 | 250 | 0.1086 | 0.8507 | 0.855 | 0.8529 | 0.3000 |
| No log | 0.59 | 250 | 0.1118 | 0.6136 | 0.6784 | 0.6444 | 0.1 |
| No log | 0.59 | 250 | 0.1151 | 0.8382 | 0.7286 | 0.7796 | 0.3000 |
| No log | 0.59 | 250 | 0.0918 | 0.7185 | 0.855 | 0.7808 | 0.4 |
| No log | 0.59 | 250 | 0.0311 | 0.8194 | 0.8939 | 0.8551 | 0.2 |
| No log | 0.59 | 250 | 0.0843 | 0.8372 | 0.9 | 0.8675 | 0.3000 |
| No log | 0.59 | 250 | 0.0297 | 0.8710 | 0.8141 | 0.8416 | 0.5 |
| No log | 0.59 | 250 | 0.0345 | 0.8245 | 0.775 | 0.7990 | 0.6 |
| No log | 0.59 | 250 | 0.0439 | 0.6682 | 0.705 | 0.6861 | 0.5 |
| No log | 0.59 | 250 | 0.0690 | 0.8221 | 0.855 | 0.8382 | 0.6 |
| No log | 0.59 | 250 | 0.0684 | 0.6849 | 0.75 | 0.7160 | 0.4 |
| No log | 0.59 | 250 | 0.0747 | 0.9130 | 0.945 | 0.9287 | 0.3000 |
| No log | 0.59 | 250 | 0.0890 | 0.8272 | 0.67 | 0.7403 | 0.5 |
| No log | 0.59 | 250 | 0.1415 | 0.7436 | 0.725 | 0.7342 | 0.6 |
| No log | 0.59 | 250 | 0.0252 | 0.7975 | 0.6332 | 0.7059 | 0.6 |
| No log | 0.59 | 250 | 0.0903 | 0.65 | 0.8492 | 0.7364 | 0.097 |
| No log | 0.59 | 250 | 0.1004 | 0.8342 | 0.83 | 0.8321 | 0.4 |
| No log | 0.59 | 250 | 0.0544 | 0.8136 | 0.895 | 0.8524 | 0.6 |
| No log | 0.59 | 250 | 0.0663 | 0.8738 | 0.9 | 0.8867 | 0.6 |
| No log | 0.59 | 250 | 0.1370 | 0.8219 | 0.6 | 0.6936 | 0.4 |
| No log | 0.59 | 250 | 0.0606 | 0.8122 | 0.865 | 0.8378 | 0.5 |
| No log | 0.59 | 250 | 0.1426 | 0.7008 | 0.89 | 0.7841 | 0.2 |
| No log | 0.59 | 250 | 0.0403 | 0.8089 | 0.91 | 0.8565 | 0.5 |
| No log | 0.59 | 250 | 0.0659 | 0.9157 | 0.76 | 0.8306 | 0.7000 |
| No log | 0.59 | 250 | 0.0170 | 0.8423 | 0.935 | 0.8863 | 0.5 |
| No log | 0.59 | 250 | 0.1061 | 0.8053 | 0.765 | 0.7846 | 0.6 |
| No log | 0.59 | 250 | 0.0421 | 0.8646 | 0.83 | 0.8469 | 0.7000 |
| No log | 0.59 | 250 | 0.0640 | 0.7650 | 0.895 | 0.8249 | 0.4 |
| No log | 0.59 | 250 | 0.0498 | 0.7900 | 0.865 | 0.8258 | 0.5 |
| No log | 0.59 | 250 | 0.0939 | 0.7689 | 0.815 | 0.7913 | 0.5 |
| No log | 0.59 | 250 | 0.0372 | 0.8632 | 0.915 | 0.8883 | 0.5 |
| No log | 0.59 | 250 | 0.0759 | 0.5760 | 0.625 | 0.5995 | 0.2 |
| No log | 0.59 | 250 | 0.1436 | 0.6419 | 0.69 | 0.6651 | 0.3000 |
| No log | 0.59 | 250 | 0.0303 | 0.8019 | 0.83 | 0.8157 | 0.3000 |
| No log | 0.59 | 250 | 0.0773 | 0.6996 | 0.92 | 0.7948 | 0.4 |
| No log | 0.59 | 250 | 0.0922 | 0.8462 | 0.825 | 0.8354 | 0.6 |
| No log | 0.59 | 250 | 0.0637 | 0.815 | 0.815 | 0.815 | 0.6 |
| No log | 0.59 | 250 | 0.0293 | 0.8028 | 0.855 | 0.8281 | 0.6 |
| No log | 0.59 | 250 | 0.0186 | 0.8302 | 0.88 | 0.8544 | 0.3000 |
| No log | 0.59 | 250 | 0.1214 | 0.7610 | 0.78 | 0.7704 | 0.6 |
| No log | 0.59 | 250 | 0.0634 | 0.6735 | 0.66 | 0.6667 | 0.4 |
| No log | 0.59 | 250 | 0.0853 | 0.8491 | 0.9 | 0.8738 | 0.3000 |
| No log | 0.59 | 250 | 0.1008 | 0.4034 | 0.71 | 0.5145 | 0.075 |
| No log | 0.59 | 250 | 0.0388 | 0.8586 | 0.8283 | 0.8432 | 0.4 |
| No log | 0.59 | 250 | 0.0895 | 0.7566 | 0.855 | 0.8028 | 0.2 |
| No log | 0.59 | 250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.007 |
| No log | 0.59 | 250 | 0.0171 | 0.6667 | 0.9239 | 0.7745 | 0.5 |
| No log | 0.59 | 250 | 0.0055 | 0.8844 | 0.995 | 0.9365 | 0.2 |
| No log | 0.59 | 250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.042 |
| No log | 0.59 | 250 | 0.0074 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 0.59 | 250 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.08 |
| No log | 0.59 | 250 | 0.0035 | 0.9947 | 1.0 | 0.9973 | 0.4 |
| No log | 0.59 | 250 | 0.0029 | 0.9755 | 0.995 | 0.9851 | 0.3000 |
| No log | 0.59 | 250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.032 |
| No log | 0.59 | 250 | 0.0025 | 0.9900 | 0.995 | 0.9925 | 0.6 |
| No log | 0.59 | 250 | 0.0020 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 0.59 | 250 | 0.0071 | 0.9655 | 0.98 | 0.9727 | 0.024 |
| No log | 0.59 | 250 | 0.0123 | 0.9946 | 0.915 | 0.9531 | 0.5 |
| No log | 0.59 | 250 | 0.0009 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 0.59 | 250 | 0.0166 | 0.9945 | 0.91 | 0.9504 | 0.6 |
| No log | 0.59 | 250 | 0.0016 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 0.59 | 250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.007 |
| No log | 0.59 | 250 | 0.0040 | 0.9949 | 0.985 | 0.9899 | 0.3000 |
| No log | 0.59 | 250 | 0.0014 | 0.995 | 0.995 | 0.995 | 0.6 |
| No log | 0.59 | 250 | 0.0055 | 0.9524 | 1.0 | 0.9756 | 0.5 |
| No log | 0.59 | 250 | 0.0409 | 0.8230 | 0.86 | 0.8411 | 0.5 |
| No log | 0.59 | 250 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 0.59 | 250 | 0.0030 | 0.9899 | 0.98 | 0.9849 | 0.3000 |
| No log | 0.59 | 250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 0.59 | 250 | 0.0015 | 0.9900 | 0.995 | 0.9925 | 0.6 |
| No log | 0.59 | 250 | 0.0017 | 0.995 | 0.995 | 0.995 | 0.3000 |
| No log | 0.59 | 250 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 0.59 | 250 | 0.0048 | 0.9512 | 0.975 | 0.9630 | 0.5 |
| No log | 0.59 | 250 | 0.0008 | 1.0 | 0.995 | 0.9975 | 0.7000 |
| No log | 0.59 | 250 | 0.0132 | 0.9897 | 0.96 | 0.9746 | 0.2 |
| No log | 0.59 | 250 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 0.59 | 250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.011 |
| No log | 0.59 | 250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 0.59 | 250 | 0.0037 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 0.59 | 250 | 0.0020 | 0.9852 | 1.0 | 0.9926 | 0.3000 |
| No log | 0.59 | 250 | 0.0013 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 0.59 | 250 | 0.0039 | 0.9792 | 1.0 | 0.9895 | 0.4 |
| No log | 0.59 | 250 | 0.0045 | 0.9206 | 0.985 | 0.9517 | 0.2 |
| No log | 0.59 | 250 | 0.0011 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 0.59 | 250 | 0.0027 | 0.9756 | 1.0 | 0.9877 | 0.0520 |
| No log | 0.59 | 250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.003 |
| No log | 0.59 | 250 | 0.0032 | 0.9851 | 0.995 | 0.9900 | 0.3000 |
| No log | 0.59 | 250 | 0.0024 | 0.9899 | 0.985 | 0.9875 | 0.8 |
| No log | 0.59 | 250 | 0.0192 | 0.9340 | 0.9293 | 0.9316 | 0.8 |
| No log | 0.59 | 250 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 0.59 | 250 | 0.0046 | 0.9706 | 0.99 | 0.9802 | 0.7000 |
| No log | 0.59 | 250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.006 |
| No log | 0.59 | 250 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.7000 |
| No log | 0.59 | 250 | 0.0142 | 0.9431 | 0.995 | 0.9684 | 0.029 |
| No log | 0.59 | 250 | 0.0014 | 0.9917 | 1.0 | 0.9959 | 0.04 |
| No log | 0.59 | 250 | 0.0150 | 0.9418 | 0.89 | 0.9152 | 0.6 |
| No log | 0.59 | 250 | 0.0078 | 0.9901 | 1.0 | 0.9950 | 0.029 |
| No log | 0.59 | 250 | 0.0021 | 0.9851 | 0.99 | 0.9875 | 0.5 |
| No log | 0.59 | 250 | 0.0022 | 0.9901 | 1.0 | 0.9950 | 0.7000 |
| No log | 0.59 | 250 | 0.0021 | 1.0 | 0.995 | 0.9975 | 0.8 |
| No log | 0.59 | 250 | 0.0097 | 0.8083 | 0.97 | 0.8818 | 0.4 |
| No log | 0.59 | 250 | 0.0048 | 1.0 | 0.99 | 0.9950 | 0.028 |
| No log | 0.59 | 250 | 0.0061 | 0.9898 | 0.975 | 0.9824 | 0.7000 |
| No log | 0.59 | 250 | 0.0177 | 0.7562 | 0.7716 | 0.7638 | 0.7000 |
| No log | 0.59 | 250 | 0.0086 | 0.8899 | 0.97 | 0.9282 | 0.3000 |
| No log | 0.59 | 250 | 0.0414 | 0.9333 | 0.84 | 0.8842 | 0.6 |
| No log | 0.59 | 250 | 0.1791 | 0.4894 | 0.8214 | 0.6133 | 0.0270 |
| No log | 0.59 | 250 | 0.0076 | 0.9307 | 0.94 | 0.9353 | 0.6 |
| No log | 0.59 | 250 | 0.0829 | 0.7860 | 0.8989 | 0.8387 | 0.4 |
| No log | 0.59 | 250 | 0.0309 | 0.8423 | 0.935 | 0.8863 | 0.3000 |
| No log | 0.59 | 250 | 0.0308 | 0.8854 | 0.85 | 0.8673 | 0.5 |
| No log | 0.59 | 250 | 0.0247 | 0.9010 | 0.91 | 0.9055 | 0.4 |
| No log | 0.59 | 250 | 0.0284 | 0.8578 | 0.965 | 0.9082 | 0.4 |
| No log | 0.59 | 250 | 0.0207 | 0.9010 | 0.865 | 0.8827 | 0.4 |
| No log | 0.59 | 250 | 0.0356 | 0.8462 | 0.825 | 0.8354 | 0.5 |
| No log | 0.59 | 250 | 0.0195 | 0.8365 | 0.87 | 0.8529 | 0.5 |
| No log | 0.59 | 250 | 0.0418 | 0.7816 | 0.805 | 0.7931 | 0.4 |
| No log | 0.59 | 250 | 0.0498 | 0.8418 | 0.825 | 0.8333 | 0.4 |
| No log | 0.59 | 250 | 0.0026 | 0.995 | 0.995 | 0.995 | 0.2 |
| No log | 0.59 | 250 | 0.0342 | 0.8075 | 0.86 | 0.8329 | 0.5 |
| No log | 0.59 | 250 | 0.0261 | 0.8259 | 0.83 | 0.8279 | 0.5 |
| No log | 0.59 | 250 | 0.0312 | 0.8158 | 0.7828 | 0.7990 | 0.5 |
| No log | 0.59 | 250 | 0.0708 | 0.6948 | 0.7437 | 0.7184 | 0.5 |
| No log | 0.59 | 250 | 0.0244 | 0.8579 | 0.845 | 0.8514 | 0.4 |
| No log | 0.59 | 250 | 0.0174 | 0.8894 | 0.885 | 0.8872 | 0.4 |
| No log | 0.59 | 250 | 0.0101 | 0.9439 | 0.925 | 0.9343 | 0.5 |
| No log | 0.59 | 250 | 0.0325 | 0.7570 | 0.81 | 0.7826 | 0.6 |
| No log | 0.59 | 250 | 0.0319 | 0.8317 | 0.84 | 0.8358 | 0.4 |
| No log | 0.59 | 250 | 0.0304 | 0.8479 | 0.92 | 0.8825 | 0.4 |
| No log | 0.59 | 250 | 0.0278 | 0.7182 | 0.79 | 0.7524 | 0.4 |
| No log | 0.59 | 250 | 0.0305 | 0.8426 | 0.83 | 0.8363 | 0.5 |
| No log | 0.59 | 250 | 0.0252 | 0.9388 | 0.92 | 0.9293 | 0.2 |
| No log | 0.59 | 250 | 0.0623 | 0.7347 | 0.72 | 0.7273 | 0.4 |
| No log | 0.59 | 250 | 0.0106 | 0.9898 | 0.975 | 0.9824 | 0.4 |
| No log | 0.59 | 250 | 0.0009 | 1.0 | 0.995 | 0.9975 | 0.7000 |
| No log | 0.59 | 250 | 0.0244 | 0.8640 | 0.985 | 0.9206 | 0.09 |
| No log | 0.59 | 250 | 0.0411 | 0.8128 | 0.76 | 0.7855 | 0.5 |
| No log | 0.59 | 250 | 0.0431 | 0.7811 | 0.785 | 0.7830 | 0.5 |
| No log | 0.59 | 250 | 0.1814 | 0.4565 | 0.4468 | 0.4516 | 0.3000 |
| No log | 0.59 | 250 | 0.0356 | 0.6789 | 0.645 | 0.6615 | 0.4 |
| No log | 0.59 | 250 | 0.0162 | 0.9368 | 0.89 | 0.9128 | 0.7000 |
| No log | 0.59 | 250 | 0.0266 | 0.8774 | 0.93 | 0.9029 | 0.4 |
| No log | 0.59 | 250 | 0.0098 | 0.9567 | 0.995 | 0.9755 | 0.3000 |
| No log | 0.59 | 250 | 0.0315 | 0.8326 | 0.895 | 0.8627 | 0.2 |
| No log | 0.59 | 250 | 0.0347 | 0.7031 | 0.675 | 0.6888 | 0.5 |
| No log | 0.59 | 250 | 0.0702 | 0.6837 | 0.7538 | 0.7171 | 0.5 |
| No log | 0.59 | 250 | 0.0192 | 0.9057 | 0.96 | 0.9320 | 0.3000 |
| No log | 0.59 | 250 | 0.0222 | 0.8564 | 0.865 | 0.8607 | 0.6 |
| No log | 0.59 | 250 | 0.0078 | 0.9833 | 0.9833 | 0.9833 | 0.3000 |
| No log | 0.59 | 250 | 0.0132 | 0.9154 | 0.92 | 0.9177 | 0.6 |
| No log | 0.59 | 250 | 0.0306 | 0.8645 | 0.925 | 0.8937 | 0.3000 |
| No log | 0.59 | 250 | 0.0120 | 0.8829 | 0.8167 | 0.8485 | 0.3000 |
| No log | 0.59 | 250 | 0.0157 | 0.8832 | 0.945 | 0.9130 | 0.4 |
| No log | 0.59 | 250 | 0.0752 | 0.7355 | 0.89 | 0.8054 | 0.083 |
| No log | 0.59 | 250 | 0.0363 | 0.7876 | 0.7755 | 0.7815 | 0.5 |
| No log | 0.59 | 250 | 0.0039 | 0.9803 | 0.995 | 0.9876 | 0.4 |
| No log | 0.59 | 250 | 0.0714 | 0.7273 | 0.8 | 0.7619 | 0.4 |
| No log | 0.59 | 250 | 0.0349 | 0.5903 | 0.425 | 0.4942 | 0.4 |
| No log | 0.59 | 250 | 0.0230 | 0.9213 | 0.82 | 0.8677 | 0.4 |
| No log | 0.59 | 250 | 0.1112 | 0.6693 | 0.84 | 0.7450 | 0.2 |
| No log | 0.59 | 250 | 0.0728 | 0.5699 | 0.795 | 0.6639 | 0.3000 |
| No log | 0.59 | 250 | 0.0585 | 0.6872 | 0.78 | 0.7307 | 0.2 |
| No log | 0.59 | 250 | 0.1074 | 0.6908 | 0.905 | 0.7835 | 0.0530 |
| No log | 0.59 | 250 | 0.0464 | 0.7489 | 0.865 | 0.8028 | 0.4 |
| No log | 0.59 | 250 | 0.0418 | 0.8009 | 0.845 | 0.8224 | 0.4 |
| No log | 0.59 | 250 | 0.0522 | 0.5385 | 0.4221 | 0.4732 | 0.4 |
| No log | 0.59 | 250 | 0.0541 | 0.7642 | 0.81 | 0.7864 | 0.4 |
| No log | 0.59 | 250 | 0.0529 | 0.7451 | 0.6909 | 0.7170 | 0.6 |
| No log | 0.59 | 250 | 0.0394 | 0.8629 | 0.85 | 0.8564 | 0.3000 |
| No log | 0.59 | 250 | 0.0394 | 0.8629 | 0.85 | 0.8564 | 0.3000 |
| No log | 0.59 | 250 | 0.0359 | 0.8066 | 0.855 | 0.8301 | 0.4 |
| No log | 0.59 | 250 | 0.0512 | 0.7605 | 0.905 | 0.8265 | 0.2 |
| No log | 0.59 | 250 | 0.0331 | 0.8028 | 0.855 | 0.8281 | 0.3000 |
| No log | 0.59 | 250 | 0.0399 | 0.8214 | 0.805 | 0.8131 | 0.5 |
| No log | 0.59 | 250 | 0.0820 | 0.6948 | 0.74 | 0.7167 | 0.3000 |
| No log | 0.59 | 250 | 0.0471 | 0.7465 | 0.81 | 0.7770 | 0.4 |
| No log | 0.59 | 250 | 0.0470 | 0.8065 | 0.875 | 0.8393 | 0.3000 |
| No log | 0.59 | 250 | 0.1420 | 0.6685 | 0.615 | 0.6406 | 0.3000 |
| No log | 0.59 | 250 | 0.0480 | 0.8488 | 0.73 | 0.7849 | 0.6 |
| No log | 0.59 | 250 | 0.0981 | 0.6911 | 0.8543 | 0.7640 | 0.096 |
| No log | 0.59 | 250 | 0.0343 | 0.8 | 0.9 | 0.8471 | 0.3000 |
| No log | 0.59 | 250 | 0.0343 | 0.8 | 0.9 | 0.8471 | 0.3000 |
| No log | 0.59 | 250 | 0.0294 | 0.7381 | 0.6739 | 0.7045 | 0.6 |
| No log | 0.59 | 250 | 0.0294 | 0.7381 | 0.6739 | 0.7045 | 0.6 |
| No log | 0.59 | 250 | 0.0368 | 0.7287 | 0.9 | 0.8054 | 0.2 |
| No log | 0.59 | 250 | 0.0432 | 0.5343 | 0.545 | 0.5396 | 0.5 |
| No log | 0.59 | 250 | 0.0513 | 0.6364 | 0.5385 | 0.5833 | 0.6 |
| No log | 0.59 | 250 | 0.0350 | 0.7897 | 0.77 | 0.7797 | 0.6 |
| No log | 0.59 | 250 | 0.0389 | 0.6154 | 0.64 | 0.6275 | 0.5 |
| No log | 0.59 | 250 | 0.0534 | 0.6332 | 0.915 | 0.7485 | 0.096 |
| No log | 0.59 | 250 | 0.0397 | 0.7959 | 0.78 | 0.7879 | 0.6 |
| No log | 0.59 | 250 | 0.0558 | 0.7591 | 0.835 | 0.7952 | 0.4 |
| No log | 0.59 | 250 | 0.0953 | 0.3636 | 0.4615 | 0.4068 | 0.3000 |
| No log | 0.59 | 250 | 0.0784 | 0.6830 | 0.905 | 0.7785 | 0.2 |
| No log | 0.59 | 250 | 0.0542 | 0.7265 | 0.85 | 0.7834 | 0.4 |
| No log | 0.59 | 250 | 0.0685 | 0.9384 | 0.685 | 0.7919 | 0.9 |
| No log | 0.59 | 250 | 0.0746 | 0.7352 | 0.805 | 0.7685 | 0.7000 |
| No log | 0.59 | 250 | 0.0668 | 0.6236 | 0.845 | 0.7176 | 0.3000 |
| No log | 0.59 | 250 | 0.1244 | 0.8113 | 0.86 | 0.8350 | 0.2 |
| No log | 0.59 | 250 | 0.0662 | 0.6348 | 0.73 | 0.6791 | 0.0870 |
| No log | 0.59 | 250 | 0.0674 | 0.4156 | 0.665 | 0.5115 | 0.2 |
| No log | 0.59 | 250 | 0.0452 | 0.8025 | 0.955 | 0.8721 | 0.9 |
| No log | 0.59 | 250 | 0.0365 | 0.4513 | 0.765 | 0.5677 | 0.094 |
| No log | 0.59 | 250 | 0.0545 | 0.7838 | 0.87 | 0.8246 | 0.3000 |
| No log | 0.59 | 250 | 0.0701 | 0.6875 | 0.9167 | 0.7857 | 0.3000 |
| No log | 0.59 | 250 | 0.0461 | 0.7542 | 0.89 | 0.8165 | 0.3000 |
| No log | 0.59 | 250 | 0.0403 | 0.8317 | 0.865 | 0.8480 | 0.4 |
| No log | 0.59 | 250 | 0.0574 | 0.6506 | 0.81 | 0.7216 | 0.3000 |
| No log | 0.59 | 250 | 0.0474 | 0.7258 | 0.9 | 0.8036 | 0.3000 |
| No log | 0.59 | 250 | 0.0469 | 0.5407 | 0.665 | 0.5964 | 0.4 |
| No log | 0.59 | 250 | 0.0278 | 0.8732 | 0.93 | 0.9007 | 0.2 |
| No log | 0.59 | 250 | 0.0951 | 0.3683 | 0.58 | 0.4505 | 0.3000 |
| No log | 0.59 | 250 | 0.0494 | 0.7284 | 0.8894 | 0.8009 | 0.3000 |
| No log | 0.59 | 250 | 0.0923 | 0.4820 | 0.6505 | 0.5537 | 0.2 |
| No log | 0.59 | 250 | 0.0403 | 0.6170 | 0.87 | 0.7220 | 0.098 |
| No log | 0.59 | 250 | 0.0362 | 0.8762 | 0.885 | 0.8806 | 0.5 |
| No log | 0.59 | 250 | 0.0599 | 0.8436 | 0.89 | 0.8662 | 0.2 |
| No log | 0.59 | 250 | 0.0599 | 0.8436 | 0.89 | 0.8662 | 0.2 |
| No log | 0.59 | 250 | 0.0441 | 0.6895 | 0.655 | 0.6718 | 0.4 |
| No log | 0.59 | 250 | 0.0587 | 0.8052 | 0.9394 | 0.8671 | 0.3000 |
| No log | 0.59 | 250 | 0.0451 | 0.6810 | 0.7940 | 0.7332 | 0.4 |
| No log | 0.59 | 250 | 0.0545 | 0.6481 | 0.93 | 0.7639 | 0.2 |
| No log | 0.59 | 250 | 0.0452 | 0.7692 | 0.85 | 0.8076 | 0.2 |
| No log | 0.59 | 250 | 0.0403 | 0.8112 | 0.795 | 0.8030 | 0.5 |
| No log | 0.59 | 250 | 0.0507 | 0.7402 | 0.755 | 0.7475 | 0.7000 |
| No log | 0.59 | 250 | 0.0502 | 0.7288 | 0.86 | 0.7890 | 0.3000 |
| No log | 0.59 | 250 | 0.0390 | 0.8558 | 0.89 | 0.8725 | 0.4 |
| No log | 0.59 | 250 | 0.0446 | 0.7395 | 0.795 | 0.7663 | 0.4 |
| No log | 0.59 | 250 | 0.0323 | 0.8528 | 0.84 | 0.8463 | 0.4 |
| No log | 0.59 | 250 | 0.0651 | 0.7269 | 0.865 | 0.7900 | 0.2 |
| No log | 0.59 | 250 | 0.0457 | 0.4610 | 0.62 | 0.5288 | 0.2 |
| No log | 0.59 | 250 | 0.0547 | 0.5138 | 0.745 | 0.6082 | 0.4 |
| No log | 0.59 | 250 | 0.0424 | 0.8444 | 0.76 | 0.8 | 0.4 |
| No log | 0.59 | 250 | 0.0590 | 0.5836 | 0.82 | 0.6819 | 0.5 |
| No log | 0.59 | 250 | 0.0582 | 0.7085 | 0.875 | 0.7830 | 0.3000 |
| No log | 0.59 | 250 | 0.0376 | 0.7915 | 0.835 | 0.8127 | 0.4 |
| No log | 0.59 | 250 | 0.0950 | 0.5033 | 0.755 | 0.604 | 0.3000 |
| No log | 0.59 | 250 | 0.0679 | 0.8182 | 0.765 | 0.7907 | 0.5 |
| No log | 0.59 | 250 | 0.0497 | 0.6545 | 0.805 | 0.7220 | 0.5 |
| No log | 0.59 | 250 | 0.0497 | 0.6545 | 0.805 | 0.7220 | 0.5 |
| No log | 0.59 | 250 | 0.0497 | 0.6545 | 0.805 | 0.7220 | 0.5 |
| No log | 0.59 | 250 | 0.0497 | 0.6545 | 0.805 | 0.7220 | 0.5 |
| No log | 0.59 | 250 | 0.0850 | 0.5812 | 0.6869 | 0.6296 | 0.2 |
| No log | 0.59 | 250 | 0.0531 | 0.7629 | 0.7475 | 0.7551 | 0.5 |
| No log | 0.59 | 250 | 0.0163 | 0.9559 | 0.975 | 0.9653 | 0.5 |
| No log | 0.59 | 250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.6 |
| No log | 0.59 | 250 | 0.0033 | 0.995 | 0.995 | 0.995 | 0.7000 |
| No log | 0.59 | 250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 0.59 | 250 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 0.59 | 250 | 0.0007 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 0.59 | 250 | 0.0012 | 1.0 | 0.995 | 0.9975 | 0.9 |
| No log | 0.59 | 250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.6 |
| No log | 0.59 | 250 | 0.0026 | 0.9851 | 0.995 | 0.9900 | 0.3000 |
| No log | 0.59 | 250 | 0.0012 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 0.59 | 250 | 0.0175 | 0.9347 | 0.93 | 0.9323 | 0.2 |
| No log | 0.59 | 250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 0.59 | 250 | 0.0274 | 0.9282 | 0.84 | 0.8819 | 0.2 |
| No log | 0.59 | 250 | 0.0018 | 0.9901 | 1.0 | 0.9950 | 0.6 |
| No log | 0.59 | 250 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 0.59 | 250 | 0.0032 | 0.995 | 0.995 | 0.995 | 0.8 |
| No log | 0.59 | 250 | 0.0058 | 0.9751 | 0.98 | 0.9776 | 0.8 |
| No log | 0.59 | 250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 0.59 | 250 | 0.0007 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 0.59 | 250 | 0.0034 | 0.9900 | 0.995 | 0.9925 | 0.3000 |
| No log | 0.59 | 250 | 0.0023 | 0.995 | 0.995 | 0.995 | 0.8 |
| No log | 0.59 | 250 | 0.0076 | 0.9848 | 0.97 | 0.9773 | 0.4 |
| No log | 0.59 | 250 | 0.1356 | 0.4509 | 0.505 | 0.4764 | 0.8 |
| No log | 0.59 | 250 | 0.1014 | 0.2633 | 0.5448 | 0.3551 | 0.3000 |
| No log | 0.59 | 250 | 0.1233 | 0.6832 | 0.69 | 0.6866 | 0.4 |
| No log | 0.59 | 250 | 0.1224 | 0.6552 | 0.665 | 0.6600 | 0.6 |
| No log | 1.17 | 500 | 0.0398 | 0.9624 | 0.895 | 0.9275 | 0.6 |
| No log | 1.17 | 500 | 0.0135 | 0.9072 | 0.88 | 0.8934 | 0.5 |
| No log | 1.17 | 500 | 0.0329 | 0.8738 | 0.9 | 0.8867 | 0.4 |
| No log | 1.17 | 500 | 0.0180 | 0.8682 | 0.955 | 0.9095 | 0.2 |
| No log | 1.17 | 500 | 0.0361 | 0.9482 | 0.915 | 0.9313 | 0.5 |
| No log | 1.17 | 500 | 0.0096 | 0.9802 | 0.99 | 0.9851 | 0.7000 |
| No log | 1.17 | 500 | 0.0157 | 0.9139 | 0.9598 | 0.9363 | 0.4 |
| No log | 1.17 | 500 | 0.0098 | 0.9660 | 0.995 | 0.9803 | 0.4 |
| No log | 1.17 | 500 | 0.0134 | 0.9390 | 1.0 | 0.9685 | 0.3000 |
| No log | 1.17 | 500 | 0.0326 | 0.9831 | 0.875 | 0.9259 | 0.8 |
| No log | 1.17 | 500 | 0.0092 | 0.9567 | 0.995 | 0.9755 | 0.5 |
| No log | 1.17 | 500 | 0.0139 | 0.9420 | 0.975 | 0.9582 | 0.6 |
| No log | 1.17 | 500 | 0.0072 | 0.9615 | 1.0 | 0.9804 | 0.4 |
| No log | 1.17 | 500 | 0.0197 | 0.9474 | 0.99 | 0.9682 | 0.7000 |
| No log | 1.17 | 500 | 0.0130 | 0.9519 | 0.99 | 0.9706 | 0.5 |
| No log | 1.17 | 500 | 0.0107 | 0.9426 | 0.985 | 0.9633 | 0.5 |
| No log | 1.17 | 500 | 0.0108 | 0.9512 | 0.9898 | 0.9701 | 0.6 |
| No log | 1.17 | 500 | 0.0130 | 0.9554 | 0.965 | 0.9602 | 0.7000 |
| No log | 1.17 | 500 | 0.0485 | 0.8724 | 0.8593 | 0.8658 | 0.6 |
| No log | 1.17 | 500 | 0.0152 | 0.9259 | 1.0 | 0.9615 | 0.083 |
| No log | 1.17 | 500 | 0.0115 | 0.9292 | 0.985 | 0.9563 | 0.2 |
| No log | 1.17 | 500 | 0.0400 | 0.9789 | 0.93 | 0.9538 | 0.5 |
| No log | 1.17 | 500 | 0.0051 | 0.9895 | 0.945 | 0.9668 | 0.8 |
| No log | 1.17 | 500 | 0.0064 | 0.9749 | 0.9848 | 0.9798 | 0.4 |
| No log | 1.17 | 500 | 0.0198 | 0.9563 | 0.985 | 0.9704 | 0.4 |
| No log | 1.17 | 500 | 0.0579 | 0.8929 | 0.875 | 0.8838 | 0.5 |
| No log | 1.17 | 500 | 0.0066 | 0.9314 | 0.9548 | 0.9429 | 0.5 |
| No log | 1.17 | 500 | 0.0232 | 0.8991 | 0.98 | 0.9378 | 0.3000 |
| No log | 1.17 | 500 | 0.0185 | 0.9 | 0.945 | 0.9220 | 0.5 |
| No log | 1.17 | 500 | 0.0150 | 0.9431 | 0.995 | 0.9684 | 0.08 |
| No log | 1.17 | 500 | 0.0322 | 0.9409 | 0.955 | 0.9479 | 0.4 |
| No log | 1.17 | 500 | 0.0253 | 0.9296 | 0.99 | 0.9588 | 0.5 |
| No log | 1.17 | 500 | 0.0130 | 0.9548 | 0.95 | 0.9524 | 0.4 |
| No log | 1.17 | 500 | 0.0121 | 0.9662 | 1.0 | 0.9828 | 0.3000 |
| No log | 1.17 | 500 | 0.0103 | 0.97 | 0.97 | 0.97 | 0.7000 |
| No log | 1.17 | 500 | 0.0836 | 0.8579 | 0.845 | 0.8514 | 0.3000 |
| No log | 1.17 | 500 | 0.0109 | 0.9378 | 0.98 | 0.9584 | 0.2 |
| No log | 1.17 | 500 | 0.0174 | 0.9752 | 0.985 | 0.9801 | 0.2 |
| No log | 1.17 | 500 | 0.0834 | 0.7175 | 0.8040 | 0.7583 | 0.0730 |
| No log | 1.17 | 500 | 0.0417 | 0.9534 | 0.9246 | 0.9388 | 0.3000 |
| No log | 1.17 | 500 | 0.0507 | 0.8447 | 0.925 | 0.8831 | 0.5 |
| No log | 1.17 | 500 | 0.0123 | 0.9502 | 0.955 | 0.9526 | 0.2 |
| No log | 1.17 | 500 | 0.0090 | 0.985 | 0.985 | 0.985 | 0.6 |
| No log | 1.17 | 500 | 0.0050 | 0.9747 | 0.9698 | 0.9723 | 0.5 |
| No log | 1.17 | 500 | 0.0072 | 0.9423 | 0.98 | 0.9608 | 0.4 |
| No log | 1.17 | 500 | 0.0048 | 0.965 | 0.965 | 0.965 | 0.8 |
| No log | 1.17 | 500 | 0.0096 | 0.9569 | 1.0 | 0.9780 | 0.5 |
| No log | 1.17 | 500 | 0.0338 | 0.8935 | 0.965 | 0.9279 | 0.5 |
| No log | 1.17 | 500 | 0.0145 | 0.9803 | 0.995 | 0.9876 | 0.3000 |
| No log | 1.17 | 500 | 0.0205 | 0.9701 | 0.975 | 0.9726 | 0.7000 |
| No log | 1.17 | 500 | 0.0154 | 0.98 | 0.98 | 0.98 | 0.8 |
| No log | 1.17 | 500 | 0.0060 | 0.9023 | 0.9848 | 0.9417 | 0.6 |
| No log | 1.17 | 500 | 0.0739 | 0.7833 | 0.795 | 0.7891 | 0.2 |
| No log | 1.17 | 500 | 0.0646 | 0.8565 | 0.895 | 0.8753 | 0.3000 |
| No log | 1.17 | 500 | 0.0105 | 0.9614 | 0.995 | 0.9779 | 0.6 |
| No log | 1.17 | 500 | 0.0139 | 0.9569 | 1.0 | 0.9780 | 0.2 |
| No log | 1.17 | 500 | 0.1061 | 0.8865 | 0.82 | 0.8519 | 0.4 |
| No log | 1.17 | 500 | 0.0073 | 0.9802 | 0.99 | 0.9851 | 0.6 |
| No log | 1.17 | 500 | 0.1253 | 0.8956 | 0.815 | 0.8534 | 0.6 |
| No log | 1.17 | 500 | 0.0115 | 0.9434 | 1.0 | 0.9709 | 0.5 |
| No log | 1.17 | 500 | 0.0091 | 0.9754 | 0.99 | 0.9826 | 0.5 |
| No log | 1.17 | 500 | 0.0067 | 0.96 | 0.96 | 0.96 | 0.9 |
| No log | 1.17 | 500 | 0.0509 | 0.9020 | 0.92 | 0.9109 | 0.4 |
| No log | 1.17 | 500 | 0.0068 | 0.9707 | 0.995 | 0.9827 | 0.5 |
| No log | 1.17 | 500 | 0.0121 | 0.9524 | 1.0 | 0.9756 | 0.8 |
| No log | 1.17 | 500 | 0.0091 | 0.9565 | 0.99 | 0.9730 | 0.8 |
| No log | 1.17 | 500 | 0.0151 | 0.9567 | 0.995 | 0.9755 | 0.2 |
| No log | 1.17 | 500 | 0.0080 | 0.9615 | 1.0 | 0.9804 | 0.3000 |
| No log | 1.17 | 500 | 0.0335 | 0.9480 | 0.82 | 0.8794 | 0.6 |
| No log | 1.17 | 500 | 0.0603 | 0.8673 | 0.915 | 0.8905 | 0.2 |
| No log | 1.17 | 500 | 0.0089 | 0.9282 | 0.97 | 0.9487 | 0.4 |
| No log | 1.17 | 500 | 0.0636 | 0.8374 | 0.85 | 0.8437 | 0.6 |
| No log | 1.17 | 500 | 0.0117 | 0.9479 | 1.0 | 0.9732 | 0.2 |
| No log | 1.17 | 500 | 0.0157 | 0.9387 | 0.995 | 0.9660 | 0.4 |
| No log | 1.17 | 500 | 0.0175 | 0.8911 | 0.9 | 0.8955 | 0.6 |
| No log | 1.17 | 500 | 0.0080 | 0.9447 | 0.94 | 0.9424 | 0.5 |
| No log | 1.17 | 500 | 0.0185 | 0.9429 | 0.99 | 0.9659 | 0.3000 |
| No log | 1.17 | 500 | 0.0149 | 0.9585 | 0.925 | 0.9415 | 0.6 |
| No log | 1.17 | 500 | 0.0488 | 0.9381 | 0.91 | 0.9239 | 0.3000 |
| No log | 1.17 | 500 | 0.0230 | 0.8493 | 0.93 | 0.8878 | 0.4 |
| No log | 1.17 | 500 | 0.0563 | 0.7934 | 0.8535 | 0.8224 | 0.0360 |
| No log | 1.17 | 500 | 0.0269 | 0.9554 | 0.965 | 0.9602 | 0.3000 |
| No log | 1.17 | 500 | 0.1266 | 0.8245 | 0.775 | 0.7990 | 0.5 |
| No log | 1.17 | 500 | 0.0216 | 0.8912 | 0.86 | 0.8753 | 0.6 |
| No log | 1.17 | 500 | 0.0612 | 0.8235 | 0.77 | 0.7959 | 0.5 |
| No log | 1.17 | 500 | 0.0219 | 0.8326 | 0.945 | 0.8852 | 0.3000 |
| No log | 1.17 | 500 | 0.0877 | 0.8134 | 0.85 | 0.8313 | 0.4 |
| No log | 1.17 | 500 | 0.0911 | 0.7339 | 0.91 | 0.8125 | 0.3000 |
| No log | 1.17 | 500 | 0.0649 | 0.6034 | 0.5427 | 0.5714 | 0.4 |
| No log | 1.17 | 500 | 0.0510 | 0.7863 | 0.92 | 0.8479 | 0.3000 |
| No log | 1.17 | 500 | 0.0517 | 0.8646 | 0.83 | 0.8469 | 0.6 |
| No log | 1.17 | 500 | 0.1045 | 0.75 | 0.825 | 0.7857 | 0.4 |
| No log | 1.17 | 500 | 0.0501 | 0.8153 | 0.905 | 0.8578 | 0.4 |
| No log | 1.17 | 500 | 0.0281 | 0.8676 | 0.885 | 0.8762 | 0.6 |
| No log | 1.17 | 500 | 0.0687 | 0.7626 | 0.835 | 0.7971 | 0.4 |
| No log | 1.17 | 500 | 0.0618 | 0.9158 | 0.87 | 0.8923 | 0.7000 |
| No log | 1.17 | 500 | 0.0542 | 0.8966 | 0.91 | 0.9032 | 0.6 |
| No log | 1.17 | 500 | 0.0492 | 0.8160 | 0.865 | 0.8398 | 0.5 |
| No log | 1.17 | 500 | 0.0379 | 0.8199 | 0.8693 | 0.8439 | 0.5 |
| No log | 1.17 | 500 | 0.0611 | 0.8033 | 0.735 | 0.7676 | 0.6 |
| No log | 1.17 | 500 | 0.0738 | 0.8521 | 0.72 | 0.7805 | 0.7000 |
| No log | 1.17 | 500 | 0.0587 | 0.8278 | 0.865 | 0.8460 | 0.4 |
| No log | 1.17 | 500 | 0.0404 | 0.7851 | 0.895 | 0.8364 | 0.3000 |
| No log | 1.17 | 500 | 0.1348 | 0.8066 | 0.855 | 0.8301 | 0.2 |
| No log | 1.17 | 500 | 0.0234 | 0.8833 | 0.795 | 0.8368 | 0.7000 |
| No log | 1.17 | 500 | 0.0426 | 0.7860 | 0.9137 | 0.8451 | 0.2 |
| No log | 1.17 | 500 | 0.0693 | 0.8198 | 0.91 | 0.8626 | 0.4 |
| No log | 1.17 | 500 | 0.0884 | 0.8012 | 0.6884 | 0.7405 | 0.5 |
| No log | 1.17 | 500 | 0.0239 | 0.7861 | 0.79 | 0.7880 | 0.4 |
| No log | 1.17 | 500 | 0.0901 | 0.7929 | 0.785 | 0.7889 | 0.6 |
| No log | 1.17 | 500 | 0.0367 | 0.8342 | 0.78 | 0.8062 | 0.5 |
| No log | 1.17 | 500 | 0.0672 | 0.8715 | 0.78 | 0.8232 | 0.6 |
| No log | 1.17 | 500 | 0.0703 | 0.8389 | 0.885 | 0.8613 | 0.5 |
| No log | 1.17 | 500 | 0.1127 | 0.7628 | 0.82 | 0.7904 | 0.5 |
| No log | 1.17 | 500 | 0.0777 | 0.7358 | 0.78 | 0.7573 | 0.3000 |
| No log | 1.17 | 500 | 0.0656 | 0.75 | 0.855 | 0.7991 | 0.4 |
| No log | 1.17 | 500 | 0.0498 | 0.8255 | 0.8794 | 0.8516 | 0.5 |
| No log | 1.17 | 500 | 0.1483 | 0.7183 | 0.765 | 0.7409 | 0.2 |
| No log | 1.17 | 500 | 0.0800 | 0.4370 | 0.555 | 0.4890 | 0.3000 |
| No log | 1.17 | 500 | 0.1018 | 0.9106 | 0.815 | 0.8602 | 0.6 |
| No log | 1.17 | 500 | 0.1469 | 0.4916 | 0.7387 | 0.5904 | 0.025 |
| No log | 1.17 | 500 | 0.0849 | 0.9053 | 0.7688 | 0.8315 | 0.5 |
| No log | 1.17 | 500 | 0.0896 | 0.6703 | 0.935 | 0.7808 | 0.2 |
| No log | 1.17 | 500 | 0.0276 | 0.9341 | 0.8586 | 0.8947 | 0.3000 |
| No log | 1.17 | 500 | 0.0701 | 0.8618 | 0.935 | 0.8969 | 0.4 |
| No log | 1.17 | 500 | 0.0266 | 0.8660 | 0.8442 | 0.8550 | 0.5 |
| No log | 1.17 | 500 | 0.0359 | 0.7752 | 0.845 | 0.8086 | 0.5 |
| No log | 1.17 | 500 | 0.0428 | 0.7636 | 0.63 | 0.6904 | 0.6 |
| No log | 1.17 | 500 | 0.0652 | 0.8366 | 0.845 | 0.8408 | 0.6 |
| No log | 1.17 | 500 | 0.0638 | 0.6840 | 0.79 | 0.7332 | 0.4 |
| No log | 1.17 | 500 | 0.0560 | 0.9175 | 0.945 | 0.9310 | 0.4 |
| No log | 1.17 | 500 | 0.0708 | 0.8010 | 0.785 | 0.7929 | 0.5 |
| No log | 1.17 | 500 | 0.1218 | 0.7051 | 0.825 | 0.7604 | 0.4 |
| No log | 1.17 | 500 | 0.0212 | 0.8246 | 0.7085 | 0.7622 | 0.7000 |
| No log | 1.17 | 500 | 0.1050 | 0.7208 | 0.7136 | 0.7172 | 0.2 |
| No log | 1.17 | 500 | 0.0946 | 0.8653 | 0.835 | 0.8499 | 0.5 |
| No log | 1.17 | 500 | 0.0515 | 0.8365 | 0.87 | 0.8529 | 0.6 |
| No log | 1.17 | 500 | 0.0578 | 0.8514 | 0.945 | 0.8957 | 0.4 |
| No log | 1.17 | 500 | 0.1081 | 0.84 | 0.735 | 0.7840 | 0.4 |
| No log | 1.17 | 500 | 0.0563 | 0.8594 | 0.825 | 0.8418 | 0.6 |
| No log | 1.17 | 500 | 0.1341 | 0.8220 | 0.785 | 0.8031 | 0.5 |
| No log | 1.17 | 500 | 0.0407 | 0.8317 | 0.865 | 0.8480 | 0.6 |
| No log | 1.17 | 500 | 0.0569 | 0.9061 | 0.82 | 0.8609 | 0.6 |
| No log | 1.17 | 500 | 0.0167 | 0.8844 | 0.88 | 0.8822 | 0.7000 |
| No log | 1.17 | 500 | 0.1030 | 0.704 | 0.88 | 0.7822 | 0.3000 |
| No log | 1.17 | 500 | 0.0379 | 0.8796 | 0.84 | 0.8593 | 0.7000 |
| No log | 1.17 | 500 | 0.0616 | 0.8125 | 0.845 | 0.8284 | 0.5 |
| No log | 1.17 | 500 | 0.0426 | 0.8293 | 0.85 | 0.8395 | 0.5 |
| No log | 1.17 | 500 | 0.0920 | 0.8387 | 0.78 | 0.8083 | 0.6 |
| No log | 1.17 | 500 | 0.0370 | 0.9162 | 0.875 | 0.8951 | 0.7000 |
| No log | 1.17 | 500 | 0.0719 | 0.6995 | 0.64 | 0.6684 | 0.3000 |
| No log | 1.17 | 500 | 0.1296 | 0.7042 | 0.75 | 0.7264 | 0.3000 |
| No log | 1.17 | 500 | 0.0285 | 0.8439 | 0.865 | 0.8543 | 0.4 |
| No log | 1.17 | 500 | 0.0734 | 0.7358 | 0.905 | 0.8117 | 0.4 |
| No log | 1.17 | 500 | 0.0920 | 0.8259 | 0.83 | 0.8279 | 0.5 |
| No log | 1.17 | 500 | 0.0570 | 0.8066 | 0.855 | 0.8301 | 0.6 |
| No log | 1.17 | 500 | 0.0259 | 0.8447 | 0.87 | 0.8571 | 0.6 |
| No log | 1.17 | 500 | 0.0163 | 0.8356 | 0.915 | 0.8735 | 0.3000 |
| No log | 1.17 | 500 | 0.1137 | 0.7364 | 0.81 | 0.7714 | 0.5 |
| No log | 1.17 | 500 | 0.0606 | 0.6230 | 0.76 | 0.6847 | 0.3000 |
| No log | 1.17 | 500 | 0.0823 | 0.8619 | 0.905 | 0.8829 | 0.2 |
| No log | 1.17 | 500 | 0.1016 | 0.45 | 0.675 | 0.54 | 0.0870 |
| No log | 1.17 | 500 | 0.0441 | 0.8385 | 0.8131 | 0.8256 | 0.3000 |
| No log | 1.17 | 500 | 0.0696 | 0.8473 | 0.86 | 0.8536 | 0.3000 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.003 |
| No log | 1.17 | 500 | 0.0121 | 0.7224 | 0.9645 | 0.8261 | 0.6 |
| No log | 1.17 | 500 | 0.0043 | 0.9643 | 0.945 | 0.9545 | 0.6 |
| No log | 1.17 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.0520 |
| No log | 1.17 | 500 | 0.0018 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.075 |
| No log | 1.17 | 500 | 0.0048 | 0.9947 | 1.0 | 0.9973 | 0.039 |
| No log | 1.17 | 500 | 0.0017 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 1.17 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.17 | 500 | 0.0036 | 0.9851 | 0.995 | 0.9900 | 0.8 |
| No log | 1.17 | 500 | 0.0014 | 0.9950 | 1.0 | 0.9975 | 0.4 |
| No log | 1.17 | 500 | 0.0080 | 0.9751 | 0.98 | 0.9776 | 0.0090 |
| No log | 1.17 | 500 | 0.0132 | 0.9947 | 0.93 | 0.9612 | 0.2 |
| No log | 1.17 | 500 | 0.0010 | 1.0 | 0.995 | 0.9975 | 0.9 |
| No log | 1.17 | 500 | 0.0184 | 0.9946 | 0.915 | 0.9531 | 0.8 |
| No log | 1.17 | 500 | 0.0018 | 0.9901 | 1.0 | 0.9950 | 0.0880 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.008 |
| No log | 1.17 | 500 | 0.0033 | 0.99 | 0.99 | 0.99 | 0.3000 |
| No log | 1.17 | 500 | 0.0011 | 1.0 | 0.995 | 0.9975 | 0.6 |
| No log | 1.17 | 500 | 0.0045 | 0.9612 | 0.99 | 0.9754 | 0.5 |
| No log | 1.17 | 500 | 0.0386 | 0.8925 | 0.83 | 0.8601 | 0.6 |
| No log | 1.17 | 500 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.089 |
| No log | 1.17 | 500 | 0.0022 | 1.0 | 0.98 | 0.9899 | 0.4 |
| No log | 1.17 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.5 |
| No log | 1.17 | 500 | 0.0016 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 1.17 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 1.17 | 500 | 0.0048 | 0.9333 | 0.98 | 0.9561 | 0.3000 |
| No log | 1.17 | 500 | 0.0018 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 1.17 | 500 | 0.0163 | 0.9846 | 0.96 | 0.9722 | 0.034 |
| No log | 1.17 | 500 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.025 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0022 | 0.9804 | 1.0 | 0.9901 | 0.2 |
| No log | 1.17 | 500 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 1.17 | 500 | 0.0040 | 0.9792 | 1.0 | 0.9895 | 0.067 |
| No log | 1.17 | 500 | 0.0043 | 0.9840 | 0.92 | 0.9509 | 0.8 |
| No log | 1.17 | 500 | 0.0011 | 0.9950 | 1.0 | 0.9975 | 0.5 |
| No log | 1.17 | 500 | 0.0021 | 0.9852 | 1.0 | 0.9926 | 0.4 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.17 | 500 | 0.0018 | 0.9950 | 0.99 | 0.9925 | 0.7000 |
| No log | 1.17 | 500 | 0.0017 | 0.9949 | 0.985 | 0.9899 | 0.5 |
| No log | 1.17 | 500 | 0.0200 | 0.9095 | 0.9646 | 0.9363 | 0.6 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 1.17 | 500 | 0.0029 | 0.9949 | 0.985 | 0.9899 | 0.9 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 1.17 | 500 | 0.0004 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 1.17 | 500 | 0.0126 | 0.9522 | 0.995 | 0.9731 | 0.048 |
| No log | 1.17 | 500 | 0.0006 | 0.9917 | 1.0 | 0.9959 | 0.0090 |
| No log | 1.17 | 500 | 0.0146 | 0.9179 | 0.895 | 0.9063 | 0.5 |
| No log | 1.17 | 500 | 0.0035 | 0.9950 | 1.0 | 0.9975 | 0.035 |
| No log | 1.17 | 500 | 0.0013 | 0.9852 | 1.0 | 0.9926 | 0.6 |
| No log | 1.17 | 500 | 0.0016 | 0.9901 | 1.0 | 0.9950 | 0.3000 |
| No log | 1.17 | 500 | 0.0017 | 0.995 | 0.995 | 0.995 | 0.7000 |
| No log | 1.17 | 500 | 0.0108 | 0.7773 | 0.995 | 0.8728 | 0.0370 |
| No log | 1.17 | 500 | 0.0058 | 1.0 | 0.99 | 0.9950 | 0.012 |
| No log | 1.17 | 500 | 0.0059 | 0.9948 | 0.965 | 0.9797 | 0.8 |
| No log | 1.17 | 500 | 0.0150 | 0.7078 | 0.8731 | 0.7818 | 0.5 |
| No log | 1.17 | 500 | 0.0070 | 0.9175 | 0.945 | 0.9310 | 0.4 |
| No log | 1.17 | 500 | 0.0445 | 0.89 | 0.89 | 0.89 | 0.5 |
| No log | 1.17 | 500 | 0.1451 | 0.6562 | 0.75 | 0.7 | 0.093 |
| No log | 1.17 | 500 | 0.0068 | 0.9356 | 0.945 | 0.9403 | 0.6 |
| No log | 1.17 | 500 | 0.0848 | 0.8298 | 0.8298 | 0.8298 | 0.5 |
| No log | 1.17 | 500 | 0.0286 | 0.8507 | 0.94 | 0.8931 | 0.3000 |
| No log | 1.17 | 500 | 0.0276 | 0.8241 | 0.89 | 0.8558 | 0.3000 |
| No log | 1.17 | 500 | 0.0253 | 0.8785 | 0.94 | 0.9082 | 0.3000 |
| No log | 1.17 | 500 | 0.0263 | 0.8986 | 0.93 | 0.9140 | 0.6 |
| No log | 1.17 | 500 | 0.0221 | 0.9171 | 0.885 | 0.9008 | 0.4 |
| No log | 1.17 | 500 | 0.0328 | 0.8811 | 0.815 | 0.8468 | 0.6 |
| No log | 1.17 | 500 | 0.0190 | 0.85 | 0.85 | 0.85 | 0.5 |
| No log | 1.17 | 500 | 0.0393 | 0.7887 | 0.84 | 0.8136 | 0.4 |
| No log | 1.17 | 500 | 0.0500 | 0.835 | 0.835 | 0.835 | 0.4 |
| No log | 1.17 | 500 | 0.0026 | 0.9852 | 1.0 | 0.9926 | 0.0860 |
| No log | 1.17 | 500 | 0.0326 | 0.8173 | 0.85 | 0.8333 | 0.5 |
| No log | 1.17 | 500 | 0.0262 | 0.8230 | 0.86 | 0.8411 | 0.4 |
| No log | 1.17 | 500 | 0.0280 | 0.8290 | 0.8081 | 0.8184 | 0.4 |
| No log | 1.17 | 500 | 0.0670 | 0.6941 | 0.7638 | 0.7273 | 0.4 |
| No log | 1.17 | 500 | 0.0241 | 0.8883 | 0.795 | 0.8391 | 0.4 |
| No log | 1.17 | 500 | 0.0177 | 0.9072 | 0.88 | 0.8934 | 0.4 |
| No log | 1.17 | 500 | 0.0075 | 0.9461 | 0.965 | 0.9554 | 0.3000 |
| No log | 1.17 | 500 | 0.0321 | 0.7892 | 0.805 | 0.7970 | 0.6 |
| No log | 1.17 | 500 | 0.0315 | 0.8122 | 0.865 | 0.8378 | 0.3000 |
| No log | 1.17 | 500 | 0.0288 | 0.8702 | 0.905 | 0.8873 | 0.5 |
| No log | 1.17 | 500 | 0.0278 | 0.8476 | 0.695 | 0.7637 | 0.7000 |
| No log | 1.17 | 500 | 0.0287 | 0.8238 | 0.865 | 0.8439 | 0.4 |
| No log | 1.17 | 500 | 0.0295 | 0.9565 | 0.88 | 0.9167 | 0.3000 |
| No log | 1.17 | 500 | 0.0628 | 0.7487 | 0.73 | 0.7392 | 0.4 |
| No log | 1.17 | 500 | 0.0120 | 0.985 | 0.985 | 0.985 | 0.3000 |
| No log | 1.17 | 500 | 0.0010 | 0.9901 | 1.0 | 0.9950 | 0.083 |
| No log | 1.17 | 500 | 0.0157 | 0.96 | 0.96 | 0.96 | 0.3000 |
| No log | 1.17 | 500 | 0.0430 | 0.7917 | 0.76 | 0.7755 | 0.4 |
| No log | 1.17 | 500 | 0.0453 | 0.7756 | 0.795 | 0.7852 | 0.5 |
| No log | 1.17 | 500 | 0.1325 | 0.5231 | 0.7234 | 0.6071 | 0.2 |
| No log | 1.17 | 500 | 0.0349 | 0.7151 | 0.665 | 0.6891 | 0.4 |
| No log | 1.17 | 500 | 0.0164 | 0.9282 | 0.905 | 0.9165 | 0.6 |
| No log | 1.17 | 500 | 0.0237 | 0.9038 | 0.94 | 0.9216 | 0.4 |
| No log | 1.17 | 500 | 0.0036 | 0.9900 | 0.995 | 0.9925 | 0.3000 |
| No log | 1.17 | 500 | 0.0294 | 0.8706 | 0.875 | 0.8728 | 0.3000 |
| No log | 1.17 | 500 | 0.0335 | 0.6728 | 0.73 | 0.7002 | 0.5 |
| No log | 1.17 | 500 | 0.0662 | 0.6996 | 0.8 | 0.7464 | 0.4 |
| No log | 1.17 | 500 | 0.0199 | 0.9175 | 0.945 | 0.9310 | 0.3000 |
| No log | 1.17 | 500 | 0.0218 | 0.865 | 0.865 | 0.865 | 0.6 |
| No log | 1.17 | 500 | 0.0044 | 0.9836 | 1.0 | 0.9917 | 0.2 |
| No log | 1.17 | 500 | 0.0119 | 0.9476 | 0.905 | 0.9258 | 0.6 |
| No log | 1.17 | 500 | 0.0291 | 0.8841 | 0.915 | 0.8993 | 0.3000 |
| No log | 1.17 | 500 | 0.0091 | 0.9083 | 0.9083 | 0.9083 | 0.3000 |
| No log | 1.17 | 500 | 0.0158 | 0.8981 | 0.925 | 0.9113 | 0.4 |
| No log | 1.17 | 500 | 0.0683 | 0.8009 | 0.845 | 0.8224 | 0.2 |
| No log | 1.17 | 500 | 0.0368 | 0.7018 | 0.8163 | 0.7547 | 0.3000 |
| No log | 1.17 | 500 | 0.0034 | 0.9756 | 1.0 | 0.9877 | 0.3000 |
| No log | 1.17 | 500 | 0.0740 | 0.7857 | 0.77 | 0.7778 | 0.5 |
| No log | 1.17 | 500 | 0.0368 | 0.7282 | 0.375 | 0.4950 | 0.6 |
| No log | 1.17 | 500 | 0.0239 | 0.8989 | 0.845 | 0.8711 | 0.3000 |
| No log | 1.17 | 500 | 0.1159 | 0.7014 | 0.775 | 0.7363 | 0.4 |
| No log | 1.17 | 500 | 0.0751 | 0.5584 | 0.86 | 0.6772 | 0.2 |
| No log | 1.17 | 500 | 0.0656 | 0.6299 | 0.8 | 0.7048 | 0.1 |
| No log | 1.17 | 500 | 0.1003 | 0.7154 | 0.905 | 0.7991 | 0.064 |
| No log | 1.17 | 500 | 0.0434 | 0.76 | 0.855 | 0.8047 | 0.4 |
| No log | 1.17 | 500 | 0.0400 | 0.7788 | 0.88 | 0.8263 | 0.3000 |
| No log | 1.17 | 500 | 0.0465 | 0.5217 | 0.5427 | 0.5320 | 0.3000 |
| No log | 1.17 | 500 | 0.0524 | 0.7938 | 0.77 | 0.7817 | 0.4 |
| No log | 1.17 | 500 | 0.0505 | 0.7778 | 0.7636 | 0.7706 | 0.6 |
| No log | 1.17 | 500 | 0.0388 | 0.8194 | 0.93 | 0.8712 | 0.2 |
| No log | 1.17 | 500 | 0.0388 | 0.8194 | 0.93 | 0.8712 | 0.2 |
| No log | 1.17 | 500 | 0.0363 | 0.8068 | 0.835 | 0.8206 | 0.4 |
| No log | 1.17 | 500 | 0.0502 | 0.8309 | 0.86 | 0.8452 | 0.4 |
| No log | 1.17 | 500 | 0.0344 | 0.7739 | 0.89 | 0.8279 | 0.2 |
| No log | 1.17 | 500 | 0.0394 | 0.8182 | 0.81 | 0.8141 | 0.5 |
| No log | 1.17 | 500 | 0.0855 | 0.6681 | 0.795 | 0.7260 | 0.2 |
| No log | 1.17 | 500 | 0.0449 | 0.7835 | 0.76 | 0.7716 | 0.5 |
| No log | 1.17 | 500 | 0.0423 | 0.8161 | 0.91 | 0.8605 | 0.3000 |
| No log | 1.17 | 500 | 0.1496 | 0.6425 | 0.665 | 0.6536 | 0.2 |
| No log | 1.17 | 500 | 0.0487 | 0.8307 | 0.785 | 0.8072 | 0.5 |
| No log | 1.17 | 500 | 0.1076 | 0.7179 | 0.8442 | 0.7760 | 0.092 |
| No log | 1.17 | 500 | 0.0332 | 0.86 | 0.86 | 0.8600 | 0.4 |
| No log | 1.17 | 500 | 0.0332 | 0.86 | 0.86 | 0.8600 | 0.4 |
| No log | 1.17 | 500 | 0.0265 | 0.8043 | 0.8043 | 0.8043 | 0.5 |
| No log | 1.17 | 500 | 0.0265 | 0.8043 | 0.8043 | 0.8043 | 0.5 |
| No log | 1.17 | 500 | 0.0368 | 0.8137 | 0.83 | 0.8218 | 0.3000 |
| No log | 1.17 | 500 | 0.0423 | 0.5543 | 0.485 | 0.5173 | 0.5 |
| No log | 1.17 | 500 | 0.0511 | 0.75 | 0.4615 | 0.5714 | 0.7000 |
| No log | 1.17 | 500 | 0.0332 | 0.7387 | 0.82 | 0.7773 | 0.5 |
| No log | 1.17 | 500 | 0.0372 | 0.6548 | 0.645 | 0.6499 | 0.5 |
| No log | 1.17 | 500 | 0.0505 | 0.7131 | 0.845 | 0.7735 | 0.2 |
| No log | 1.17 | 500 | 0.0383 | 0.7864 | 0.81 | 0.7980 | 0.6 |
| No log | 1.17 | 500 | 0.0563 | 0.7671 | 0.84 | 0.8019 | 0.4 |
| No log | 1.17 | 500 | 0.0991 | 0.4272 | 0.4231 | 0.4251 | 0.3000 |
| No log | 1.17 | 500 | 0.0792 | 0.7333 | 0.825 | 0.7765 | 0.3000 |
| No log | 1.17 | 500 | 0.0523 | 0.7333 | 0.88 | 0.8 | 0.3000 |
| No log | 1.17 | 500 | 0.0913 | 0.7784 | 0.755 | 0.7665 | 0.8 |
| No log | 1.17 | 500 | 0.1089 | 0.6964 | 0.86 | 0.7696 | 0.4 |
| No log | 1.17 | 500 | 0.0702 | 0.6508 | 0.82 | 0.7257 | 0.3000 |
| No log | 1.17 | 500 | 0.1226 | 0.7676 | 0.925 | 0.8390 | 0.063 |
| No log | 1.17 | 500 | 0.1045 | 0.5249 | 0.685 | 0.5944 | 0.0260 |
| No log | 1.17 | 500 | 0.0664 | 0.5 | 0.575 | 0.5349 | 0.3000 |
| No log | 1.17 | 500 | 0.0686 | 0.7869 | 0.96 | 0.8649 | 0.9 |
| No log | 1.17 | 500 | 0.0368 | 0.5989 | 0.56 | 0.5788 | 0.2 |
| No log | 1.17 | 500 | 0.0556 | 0.8 | 0.86 | 0.8289 | 0.3000 |
| No log | 1.17 | 500 | 0.0615 | 0.6471 | 0.9167 | 0.7586 | 0.2 |
| No log | 1.17 | 500 | 0.0465 | 0.7554 | 0.88 | 0.8129 | 0.3000 |
| No log | 1.17 | 500 | 0.0405 | 0.8169 | 0.87 | 0.8426 | 0.4 |
| No log | 1.17 | 500 | 0.0623 | 0.7019 | 0.73 | 0.7157 | 0.3000 |
| No log | 1.17 | 500 | 0.0486 | 0.7810 | 0.82 | 0.8 | 0.4 |
| No log | 1.17 | 500 | 0.0480 | 0.5637 | 0.575 | 0.5693 | 0.5 |
| No log | 1.17 | 500 | 0.0290 | 0.8688 | 0.96 | 0.9121 | 0.098 |
| No log | 1.17 | 500 | 0.0970 | 0.4194 | 0.52 | 0.4643 | 0.5 |
| No log | 1.17 | 500 | 0.0513 | 0.7925 | 0.8442 | 0.8175 | 0.4 |
| No log | 1.17 | 500 | 0.0983 | 0.6667 | 0.4854 | 0.5618 | 0.4 |
| No log | 1.17 | 500 | 0.0582 | 0.5820 | 0.745 | 0.6535 | 0.1 |
| No log | 1.17 | 500 | 0.0387 | 0.8634 | 0.885 | 0.8741 | 0.5 |
| No log | 1.17 | 500 | 0.0582 | 0.8424 | 0.855 | 0.8486 | 0.3000 |
| No log | 1.17 | 500 | 0.0582 | 0.8424 | 0.855 | 0.8486 | 0.3000 |
| No log | 1.17 | 500 | 0.0432 | 0.6129 | 0.76 | 0.6786 | 0.2 |
| No log | 1.17 | 500 | 0.0626 | 0.8153 | 0.9141 | 0.8619 | 0.4 |
| No log | 1.17 | 500 | 0.0468 | 0.6681 | 0.7588 | 0.7106 | 0.4 |
| No log | 1.17 | 500 | 0.0531 | 0.7511 | 0.83 | 0.7886 | 0.4 |
| No log | 1.17 | 500 | 0.0462 | 0.7961 | 0.82 | 0.8079 | 0.3000 |
| No log | 1.17 | 500 | 0.0398 | 0.7447 | 0.875 | 0.8046 | 0.2 |
| No log | 1.17 | 500 | 0.0500 | 0.755 | 0.755 | 0.755 | 0.7000 |
| No log | 1.17 | 500 | 0.0513 | 0.7805 | 0.8 | 0.7901 | 0.4 |
| No log | 1.17 | 500 | 0.0376 | 0.8402 | 0.92 | 0.8783 | 0.3000 |
| No log | 1.17 | 500 | 0.0478 | 0.7824 | 0.755 | 0.7684 | 0.5 |
| No log | 1.17 | 500 | 0.0306 | 0.8865 | 0.82 | 0.8519 | 0.5 |
| No log | 1.17 | 500 | 0.0631 | 0.7617 | 0.815 | 0.7874 | 0.3000 |
| No log | 1.17 | 500 | 0.0463 | 0.5 | 0.625 | 0.5556 | 0.2 |
| No log | 1.17 | 500 | 0.0563 | 0.5103 | 0.745 | 0.6057 | 0.4 |
| No log | 1.17 | 500 | 0.0443 | 0.7682 | 0.845 | 0.8048 | 0.2 |
| No log | 1.17 | 500 | 0.0644 | 0.5904 | 0.8 | 0.6794 | 0.6 |
| No log | 1.17 | 500 | 0.0595 | 0.7328 | 0.85 | 0.7870 | 0.3000 |
| No log | 1.17 | 500 | 0.0389 | 0.7717 | 0.845 | 0.8067 | 0.3000 |
| No log | 1.17 | 500 | 0.1053 | 0.5017 | 0.73 | 0.5947 | 0.3000 |
| No log | 1.17 | 500 | 0.0697 | 0.8071 | 0.795 | 0.8010 | 0.5 |
| No log | 1.17 | 500 | 0.0487 | 0.6523 | 0.835 | 0.7325 | 0.4 |
| No log | 1.17 | 500 | 0.0487 | 0.6523 | 0.835 | 0.7325 | 0.4 |
| No log | 1.17 | 500 | 0.0487 | 0.6523 | 0.835 | 0.7325 | 0.4 |
| No log | 1.17 | 500 | 0.0487 | 0.6523 | 0.835 | 0.7325 | 0.4 |
| No log | 1.17 | 500 | 0.1022 | 0.5931 | 0.6111 | 0.6020 | 0.2 |
| No log | 1.17 | 500 | 0.0560 | 0.7217 | 0.8384 | 0.7757 | 0.4 |
| No log | 1.17 | 500 | 0.0189 | 0.9327 | 0.97 | 0.9510 | 0.3000 |
| No log | 1.17 | 500 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.3000 |
| No log | 1.17 | 500 | 0.0028 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 1.17 | 500 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.4 |
| No log | 1.17 | 500 | 0.0006 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 1.17 | 500 | 0.0036 | 0.99 | 0.99 | 0.99 | 0.9 |
| No log | 1.17 | 500 | 0.0009 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 1.17 | 500 | 0.0185 | 0.9786 | 0.915 | 0.9457 | 0.4 |
| No log | 1.17 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0330 | 0.8973 | 0.83 | 0.8623 | 0.2 |
| No log | 1.17 | 500 | 0.0017 | 0.9901 | 1.0 | 0.9950 | 0.3000 |
| No log | 1.17 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 1.17 | 500 | 0.0038 | 0.99 | 0.99 | 0.99 | 0.9 |
| No log | 1.17 | 500 | 0.0057 | 0.9703 | 0.98 | 0.9751 | 0.7000 |
| No log | 1.17 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.9 |
| No log | 1.17 | 500 | 0.0004 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 1.17 | 500 | 0.0025 | 0.9900 | 0.995 | 0.9925 | 0.2 |
| No log | 1.17 | 500 | 0.0017 | 0.9950 | 1.0 | 0.9975 | 0.7000 |
| No log | 1.17 | 500 | 0.0097 | 0.9695 | 0.955 | 0.9622 | 0.3000 |
| No log | 1.17 | 500 | 0.1400 | 0.4292 | 0.47 | 0.4487 | 0.8 |
| No log | 1.17 | 500 | 0.1016 | 0.2529 | 0.5931 | 0.3546 | 0.2 |
| No log | 1.17 | 500 | 0.1260 | 0.5954 | 0.78 | 0.6753 | 0.2 |
| No log | 1.17 | 500 | 0.1194 | 0.5873 | 0.74 | 0.6549 | 0.4 |
| No log | 1.76 | 750 | 0.0404 | 0.9728 | 0.895 | 0.9323 | 0.6 |
| No log | 1.76 | 750 | 0.0125 | 0.9235 | 0.905 | 0.9141 | 0.6 |
| No log | 1.76 | 750 | 0.0329 | 0.8545 | 0.91 | 0.8814 | 0.3000 |
| No log | 1.76 | 750 | 0.0157 | 0.8930 | 0.96 | 0.9253 | 0.2 |
| No log | 1.76 | 750 | 0.0348 | 0.9474 | 0.9 | 0.9231 | 0.5 |
| No log | 1.76 | 750 | 0.0094 | 0.9754 | 0.99 | 0.9826 | 0.7000 |
| No log | 1.76 | 750 | 0.0140 | 0.9588 | 0.9347 | 0.9466 | 0.7000 |
| No log | 1.76 | 750 | 0.0102 | 0.98 | 0.98 | 0.98 | 0.6 |
| No log | 1.76 | 750 | 0.0131 | 0.9476 | 0.995 | 0.9707 | 0.7000 |
| No log | 1.76 | 750 | 0.0294 | 0.9126 | 0.94 | 0.9261 | 0.3000 |
| No log | 1.76 | 750 | 0.0082 | 0.9662 | 1.0 | 0.9828 | 0.5 |
| No log | 1.76 | 750 | 0.0131 | 0.9415 | 0.965 | 0.9531 | 0.6 |
| No log | 1.76 | 750 | 0.0071 | 0.9615 | 1.0 | 0.9804 | 0.4 |
| No log | 1.76 | 750 | 0.0192 | 0.9522 | 0.995 | 0.9731 | 0.7000 |
| No log | 1.76 | 750 | 0.0138 | 0.9517 | 0.985 | 0.9681 | 0.6 |
| No log | 1.76 | 750 | 0.0102 | 0.9384 | 0.99 | 0.9635 | 0.5 |
| No log | 1.76 | 750 | 0.0098 | 0.9797 | 0.9797 | 0.9797 | 0.9 |
| No log | 1.76 | 750 | 0.0123 | 0.9336 | 0.985 | 0.9586 | 0.5 |
| No log | 1.76 | 750 | 0.0446 | 0.9043 | 0.8543 | 0.8786 | 0.7000 |
| No log | 1.76 | 750 | 0.0163 | 0.9259 | 1.0 | 0.9615 | 0.069 |
| No log | 1.76 | 750 | 0.0124 | 0.9299 | 0.995 | 0.9614 | 0.065 |
| No log | 1.76 | 750 | 0.0489 | 0.9592 | 0.94 | 0.9495 | 0.2 |
| No log | 1.76 | 750 | 0.0046 | 1.0 | 0.945 | 0.9717 | 0.7000 |
| No log | 1.76 | 750 | 0.0064 | 0.9846 | 0.9746 | 0.9796 | 0.3000 |
| No log | 1.76 | 750 | 0.0188 | 0.9476 | 0.995 | 0.9707 | 0.2 |
| No log | 1.76 | 750 | 0.0541 | 0.8844 | 0.88 | 0.8822 | 0.4 |
| No log | 1.76 | 750 | 0.0062 | 0.9190 | 0.9698 | 0.9438 | 0.4 |
| No log | 1.76 | 750 | 0.0214 | 0.9320 | 0.96 | 0.9458 | 0.6 |
| No log | 1.76 | 750 | 0.0160 | 0.9314 | 0.95 | 0.9406 | 0.5 |
| No log | 1.76 | 750 | 0.0153 | 0.9476 | 0.995 | 0.9707 | 0.083 |
| No log | 1.76 | 750 | 0.0317 | 0.9412 | 0.96 | 0.9505 | 0.3000 |
| No log | 1.76 | 750 | 0.0255 | 0.9336 | 0.985 | 0.9586 | 0.5 |
| No log | 1.76 | 750 | 0.0152 | 0.9409 | 0.955 | 0.9479 | 0.2 |
| No log | 1.76 | 750 | 0.0111 | 0.9709 | 1.0 | 0.9852 | 0.3000 |
| No log | 1.76 | 750 | 0.0106 | 0.97 | 0.97 | 0.97 | 0.4 |
| No log | 1.76 | 750 | 0.0793 | 0.8684 | 0.825 | 0.8462 | 0.4 |
| No log | 1.76 | 750 | 0.0102 | 0.9378 | 0.98 | 0.9584 | 0.2 |
| No log | 1.76 | 750 | 0.0183 | 0.98 | 0.98 | 0.98 | 0.3000 |
| No log | 1.76 | 750 | 0.1075 | 0.6990 | 0.6884 | 0.6937 | 0.089 |
| No log | 1.76 | 750 | 0.0407 | 0.9485 | 0.9246 | 0.9364 | 0.2 |
| No log | 1.76 | 750 | 0.0508 | 0.8274 | 0.935 | 0.8779 | 0.4 |
| No log | 1.76 | 750 | 0.0113 | 0.9645 | 0.95 | 0.9572 | 0.2 |
| No log | 1.76 | 750 | 0.0101 | 0.9756 | 1.0 | 0.9877 | 0.2 |
| No log | 1.76 | 750 | 0.0055 | 0.97 | 0.9749 | 0.9724 | 0.5 |
| No log | 1.76 | 750 | 0.0066 | 0.9559 | 0.975 | 0.9653 | 0.6 |
| No log | 1.76 | 750 | 0.0039 | 0.9519 | 0.99 | 0.9706 | 0.6 |
| No log | 1.76 | 750 | 0.0097 | 0.9569 | 1.0 | 0.9780 | 0.6 |
| No log | 1.76 | 750 | 0.0322 | 0.8930 | 0.96 | 0.9253 | 0.5 |
| No log | 1.76 | 750 | 0.0133 | 0.9804 | 1.0 | 0.9901 | 0.2 |
| No log | 1.76 | 750 | 0.0250 | 0.9563 | 0.985 | 0.9704 | 0.3000 |
| No log | 1.76 | 750 | 0.0157 | 0.9847 | 0.965 | 0.9747 | 0.9 |
| No log | 1.76 | 750 | 0.0045 | 0.9366 | 0.9746 | 0.9552 | 0.6 |
| No log | 1.76 | 750 | 0.0824 | 0.7308 | 0.855 | 0.7880 | 0.0880 |
| No log | 1.76 | 750 | 0.0654 | 0.8599 | 0.89 | 0.8747 | 0.2 |
| No log | 1.76 | 750 | 0.0104 | 0.9660 | 0.995 | 0.9803 | 0.6 |
| No log | 1.76 | 750 | 0.0148 | 0.9524 | 1.0 | 0.9756 | 0.067 |
| No log | 1.76 | 750 | 0.0991 | 0.8984 | 0.84 | 0.8682 | 0.3000 |
| No log | 1.76 | 750 | 0.0069 | 0.9709 | 1.0 | 0.9852 | 0.3000 |
| No log | 1.76 | 750 | 0.1156 | 0.9353 | 0.795 | 0.8595 | 0.7000 |
| No log | 1.76 | 750 | 0.0117 | 0.9565 | 0.99 | 0.9730 | 0.8 |
| No log | 1.76 | 750 | 0.0094 | 0.9660 | 0.995 | 0.9803 | 0.3000 |
| No log | 1.76 | 750 | 0.0074 | 0.9598 | 0.955 | 0.9574 | 0.9 |
| No log | 1.76 | 750 | 0.0493 | 0.8990 | 0.935 | 0.9167 | 0.4 |
| No log | 1.76 | 750 | 0.0071 | 0.9660 | 0.995 | 0.9803 | 0.2 |
| No log | 1.76 | 750 | 0.0115 | 0.9614 | 0.995 | 0.9779 | 0.8 |
| No log | 1.76 | 750 | 0.0095 | 0.9429 | 0.99 | 0.9659 | 0.8 |
| No log | 1.76 | 750 | 0.0146 | 0.9567 | 0.995 | 0.9755 | 0.076 |
| No log | 1.76 | 750 | 0.0078 | 0.9709 | 1.0 | 0.9852 | 0.5 |
| No log | 1.76 | 750 | 0.0307 | 0.9344 | 0.855 | 0.8930 | 0.5 |
| No log | 1.76 | 750 | 0.0535 | 0.9031 | 0.885 | 0.8939 | 0.4 |
| No log | 1.76 | 750 | 0.0094 | 0.9282 | 0.97 | 0.9487 | 0.2 |
| No log | 1.76 | 750 | 0.0607 | 0.7906 | 0.925 | 0.8525 | 0.4 |
| No log | 1.76 | 750 | 0.0112 | 0.9479 | 1.0 | 0.9732 | 0.054 |
| No log | 1.76 | 750 | 0.0169 | 0.9648 | 0.96 | 0.9624 | 0.8 |
| No log | 1.76 | 750 | 0.0157 | 0.8597 | 0.95 | 0.9026 | 0.5 |
| No log | 1.76 | 750 | 0.0074 | 0.9406 | 0.95 | 0.9453 | 0.3000 |
| No log | 1.76 | 750 | 0.0185 | 0.9517 | 0.985 | 0.9681 | 0.4 |
| No log | 1.76 | 750 | 0.0135 | 0.9543 | 0.94 | 0.9471 | 0.4 |
| No log | 1.76 | 750 | 0.0519 | 0.9531 | 0.915 | 0.9337 | 0.2 |
| No log | 1.76 | 750 | 0.0223 | 0.8319 | 0.94 | 0.8826 | 0.2 |
| No log | 1.76 | 750 | 0.0676 | 0.7434 | 0.8485 | 0.7925 | 0.015 |
| No log | 1.76 | 750 | 0.0264 | 0.96 | 0.96 | 0.96 | 0.2 |
| No log | 1.76 | 750 | 0.1184 | 0.8019 | 0.83 | 0.8157 | 0.3000 |
| No log | 1.76 | 750 | 0.0199 | 0.8812 | 0.89 | 0.8856 | 0.5 |
| No log | 1.76 | 750 | 0.0644 | 0.7681 | 0.795 | 0.7813 | 0.4 |
| No log | 1.76 | 750 | 0.0214 | 0.8806 | 0.885 | 0.8828 | 0.4 |
| No log | 1.76 | 750 | 0.0724 | 0.8442 | 0.84 | 0.8421 | 0.3000 |
| No log | 1.76 | 750 | 0.0876 | 0.7848 | 0.875 | 0.8274 | 0.4 |
| No log | 1.76 | 750 | 0.0605 | 0.5897 | 0.5779 | 0.5838 | 0.4 |
| No log | 1.76 | 750 | 0.0508 | 0.7922 | 0.915 | 0.8492 | 0.3000 |
| No log | 1.76 | 750 | 0.0460 | 0.8364 | 0.895 | 0.8647 | 0.4 |
| No log | 1.76 | 750 | 0.0955 | 0.7522 | 0.865 | 0.8047 | 0.3000 |
| No log | 1.76 | 750 | 0.0437 | 0.8607 | 0.865 | 0.8628 | 0.6 |
| No log | 1.76 | 750 | 0.0255 | 0.8719 | 0.885 | 0.8784 | 0.5 |
| No log | 1.76 | 750 | 0.0650 | 0.7216 | 0.92 | 0.8088 | 0.2 |
| No log | 1.76 | 750 | 0.0583 | 0.9115 | 0.875 | 0.8929 | 0.6 |
| No log | 1.76 | 750 | 0.0549 | 0.9040 | 0.895 | 0.8995 | 0.6 |
| No log | 1.76 | 750 | 0.0462 | 0.7713 | 0.86 | 0.8132 | 0.4 |
| No log | 1.76 | 750 | 0.0340 | 0.8009 | 0.8894 | 0.8429 | 0.4 |
| No log | 1.76 | 750 | 0.0608 | 0.7013 | 0.81 | 0.7517 | 0.4 |
| No log | 1.76 | 750 | 0.0697 | 0.75 | 0.825 | 0.7857 | 0.5 |
| No log | 1.76 | 750 | 0.0547 | 0.8462 | 0.88 | 0.8627 | 0.4 |
| No log | 1.76 | 750 | 0.0434 | 0.8482 | 0.81 | 0.8286 | 0.5 |
| No log | 1.76 | 750 | 0.1335 | 0.8116 | 0.84 | 0.8256 | 0.2 |
| No log | 1.76 | 750 | 0.0240 | 0.8953 | 0.77 | 0.8280 | 0.7000 |
| No log | 1.76 | 750 | 0.0379 | 0.8947 | 0.8629 | 0.8786 | 0.3000 |
| No log | 1.76 | 750 | 0.0696 | 0.8585 | 0.88 | 0.8691 | 0.4 |
| No log | 1.76 | 750 | 0.0798 | 0.7240 | 0.8040 | 0.7619 | 0.3000 |
| No log | 1.76 | 750 | 0.0235 | 0.7933 | 0.825 | 0.8088 | 0.3000 |
| No log | 1.76 | 750 | 0.0809 | 0.7887 | 0.84 | 0.8136 | 0.5 |
| No log | 1.76 | 750 | 0.0347 | 0.8071 | 0.795 | 0.8010 | 0.4 |
| No log | 1.76 | 750 | 0.0643 | 0.7629 | 0.885 | 0.8194 | 0.3000 |
| No log | 1.76 | 750 | 0.0710 | 0.8358 | 0.84 | 0.8379 | 0.5 |
| No log | 1.76 | 750 | 0.1096 | 0.7913 | 0.815 | 0.8030 | 0.5 |
| No log | 1.76 | 750 | 0.0757 | 0.8167 | 0.735 | 0.7737 | 0.4 |
| No log | 1.76 | 750 | 0.0617 | 0.7840 | 0.835 | 0.8087 | 0.4 |
| No log | 1.76 | 750 | 0.0502 | 0.7712 | 0.9146 | 0.8368 | 0.3000 |
| No log | 1.76 | 750 | 0.1509 | 0.6026 | 0.925 | 0.7298 | 0.035 |
| No log | 1.76 | 750 | 0.0777 | 0.472 | 0.59 | 0.5244 | 0.3000 |
| No log | 1.76 | 750 | 0.0977 | 0.8901 | 0.85 | 0.8696 | 0.4 |
| No log | 1.76 | 750 | 0.2090 | 0.3256 | 0.8442 | 0.4699 | 0.002 |
| No log | 1.76 | 750 | 0.0802 | 0.8902 | 0.7739 | 0.8280 | 0.4 |
| No log | 1.76 | 750 | 0.0825 | 0.7804 | 0.835 | 0.8068 | 0.5 |
| No log | 1.76 | 750 | 0.0247 | 0.9358 | 0.8838 | 0.9091 | 0.3000 |
| No log | 1.76 | 750 | 0.0693 | 0.8905 | 0.935 | 0.9122 | 0.3000 |
| No log | 1.76 | 750 | 0.0263 | 0.8731 | 0.8643 | 0.8687 | 0.5 |
| No log | 1.76 | 750 | 0.0314 | 0.8413 | 0.795 | 0.8175 | 0.6 |
| No log | 1.76 | 750 | 0.0409 | 0.6844 | 0.77 | 0.7247 | 0.4 |
| No log | 1.76 | 750 | 0.0626 | 0.8485 | 0.84 | 0.8442 | 0.6 |
| No log | 1.76 | 750 | 0.0607 | 0.6820 | 0.815 | 0.7426 | 0.4 |
| No log | 1.76 | 750 | 0.0648 | 0.9175 | 0.945 | 0.9310 | 0.3000 |
| No log | 1.76 | 750 | 0.0606 | 0.8293 | 0.85 | 0.8395 | 0.5 |
| No log | 1.76 | 750 | 0.1217 | 0.7069 | 0.82 | 0.7593 | 0.4 |
| No log | 1.76 | 750 | 0.0208 | 0.8333 | 0.7538 | 0.7916 | 0.7000 |
| No log | 1.76 | 750 | 0.1449 | 0.5784 | 0.7789 | 0.6638 | 0.048 |
| No log | 1.76 | 750 | 0.0940 | 0.8842 | 0.84 | 0.8615 | 0.5 |
| No log | 1.76 | 750 | 0.0492 | 0.8 | 0.9 | 0.8471 | 0.4 |
| No log | 1.76 | 750 | 0.0610 | 0.8551 | 0.915 | 0.8841 | 0.4 |
| No log | 1.76 | 750 | 0.0945 | 0.8247 | 0.8 | 0.8122 | 0.3000 |
| No log | 1.76 | 750 | 0.0541 | 0.9029 | 0.79 | 0.8427 | 0.7000 |
| No log | 1.76 | 750 | 0.1256 | 0.8667 | 0.78 | 0.8211 | 0.5 |
| No log | 1.76 | 750 | 0.0367 | 0.8551 | 0.885 | 0.8698 | 0.6 |
| No log | 1.76 | 750 | 0.0566 | 0.8821 | 0.86 | 0.8709 | 0.5 |
| No log | 1.76 | 750 | 0.0169 | 0.8706 | 0.875 | 0.8728 | 0.6 |
| No log | 1.76 | 750 | 0.0930 | 0.716 | 0.895 | 0.7956 | 0.3000 |
| No log | 1.76 | 750 | 0.0373 | 0.8219 | 0.9 | 0.8592 | 0.5 |
| No log | 1.76 | 750 | 0.0591 | 0.8279 | 0.89 | 0.8578 | 0.4 |
| No log | 1.76 | 750 | 0.0366 | 0.8796 | 0.84 | 0.8593 | 0.6 |
| No log | 1.76 | 750 | 0.0839 | 0.8299 | 0.805 | 0.8173 | 0.5 |
| No log | 1.76 | 750 | 0.0345 | 0.9086 | 0.895 | 0.9018 | 0.6 |
| No log | 1.76 | 750 | 0.0666 | 0.6256 | 0.71 | 0.6651 | 0.2 |
| No log | 1.76 | 750 | 0.1225 | 0.7861 | 0.68 | 0.7292 | 0.5 |
| No log | 1.76 | 750 | 0.0279 | 0.8730 | 0.825 | 0.8483 | 0.4 |
| No log | 1.76 | 750 | 0.0679 | 0.7725 | 0.9 | 0.8314 | 0.4 |
| No log | 1.76 | 750 | 0.0876 | 0.7617 | 0.895 | 0.8230 | 0.3000 |
| No log | 1.76 | 750 | 0.0518 | 0.8009 | 0.885 | 0.8409 | 0.5 |
| No log | 1.76 | 750 | 0.0227 | 0.8731 | 0.86 | 0.8665 | 0.6 |
| No log | 1.76 | 750 | 0.0171 | 0.8451 | 0.9 | 0.8717 | 0.3000 |
| No log | 1.76 | 750 | 0.1085 | 0.8010 | 0.765 | 0.7826 | 0.6 |
| No log | 1.76 | 750 | 0.0577 | 0.6376 | 0.73 | 0.6807 | 0.3000 |
| No log | 1.76 | 750 | 0.0764 | 0.8520 | 0.95 | 0.8983 | 0.0720 |
| No log | 1.76 | 750 | 0.1073 | 0.4710 | 0.61 | 0.5316 | 0.085 |
| No log | 1.76 | 750 | 0.0469 | 0.7325 | 0.8990 | 0.8073 | 0.096 |
| No log | 1.76 | 750 | 0.0669 | 0.8967 | 0.825 | 0.8594 | 0.4 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.002 |
| No log | 1.76 | 750 | 0.0115 | 0.7619 | 0.8934 | 0.8224 | 0.7000 |
| No log | 1.76 | 750 | 0.0043 | 0.9238 | 0.97 | 0.9463 | 0.5 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.006 |
| No log | 1.76 | 750 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.0440 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.049 |
| No log | 1.76 | 750 | 0.0027 | 0.9947 | 1.0 | 0.9973 | 0.069 |
| No log | 1.76 | 750 | 0.0013 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.76 | 750 | 0.0029 | 0.995 | 0.995 | 0.995 | 0.8 |
| No log | 1.76 | 750 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 1.76 | 750 | 0.0083 | 0.975 | 0.975 | 0.975 | 0.017 |
| No log | 1.76 | 750 | 0.0140 | 0.9946 | 0.925 | 0.9585 | 0.2 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0151 | 0.9689 | 0.935 | 0.9517 | 0.3000 |
| No log | 1.76 | 750 | 0.0013 | 0.9950 | 1.0 | 0.9975 | 0.0510 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.017 |
| No log | 1.76 | 750 | 0.0034 | 0.9949 | 0.985 | 0.9899 | 0.2 |
| No log | 1.76 | 750 | 0.0012 | 1.0 | 0.985 | 0.9924 | 0.8 |
| No log | 1.76 | 750 | 0.0032 | 0.9614 | 0.995 | 0.9779 | 0.2 |
| No log | 1.76 | 750 | 0.0372 | 0.9162 | 0.82 | 0.8654 | 0.6 |
| No log | 1.76 | 750 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.038 |
| No log | 1.76 | 750 | 0.0018 | 1.0 | 0.98 | 0.9899 | 0.3000 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.017 |
| No log | 1.76 | 750 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 1.76 | 750 | 0.0013 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 1.76 | 750 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0046 | 0.9333 | 0.98 | 0.9561 | 0.3000 |
| No log | 1.76 | 750 | 0.0019 | 0.9901 | 1.0 | 0.9950 | 0.049 |
| No log | 1.76 | 750 | 0.0154 | 0.9846 | 0.96 | 0.9722 | 0.0370 |
| No log | 1.76 | 750 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.0090 |
| No log | 1.76 | 750 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0023 | 0.9852 | 1.0 | 0.9926 | 0.2 |
| No log | 1.76 | 750 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 1.76 | 750 | 0.0038 | 0.9792 | 1.0 | 0.9895 | 0.3000 |
| No log | 1.76 | 750 | 0.0038 | 0.9174 | 1.0 | 0.9569 | 0.093 |
| No log | 1.76 | 750 | 0.0037 | 0.9804 | 1.0 | 0.9901 | 0.4 |
| No log | 1.76 | 750 | 0.0013 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.76 | 750 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 1.76 | 750 | 0.0019 | 0.9949 | 0.985 | 0.9899 | 0.2 |
| No log | 1.76 | 750 | 0.0176 | 0.9275 | 0.9697 | 0.9481 | 0.7000 |
| No log | 1.76 | 750 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 1.76 | 750 | 0.0019 | 0.9900 | 0.995 | 0.9925 | 0.6 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.007 |
| No log | 1.76 | 750 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0147 | 0.9390 | 1.0 | 0.9685 | 0.0140 |
| No log | 1.76 | 750 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 1.76 | 750 | 0.0147 | 0.9474 | 0.9 | 0.9231 | 0.5 |
| No log | 1.76 | 750 | 0.0020 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0006 | 0.9950 | 1.0 | 0.9975 | 0.6 |
| No log | 1.76 | 750 | 0.0012 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 1.76 | 750 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.5 |
| No log | 1.76 | 750 | 0.0110 | 0.7787 | 0.985 | 0.8698 | 0.039 |
| No log | 1.76 | 750 | 0.0070 | 1.0 | 0.99 | 0.9950 | 0.0090 |
| No log | 1.76 | 750 | 0.0060 | 0.9704 | 0.985 | 0.9777 | 0.097 |
| No log | 1.76 | 750 | 0.0164 | 0.7285 | 0.8173 | 0.7703 | 0.6 |
| No log | 1.76 | 750 | 0.0071 | 0.9091 | 0.95 | 0.9291 | 0.4 |
| No log | 1.76 | 750 | 0.0358 | 0.9227 | 0.895 | 0.9086 | 0.5 |
| No log | 1.76 | 750 | 0.1316 | 0.6324 | 0.7679 | 0.6935 | 0.096 |
| No log | 1.76 | 750 | 0.0057 | 0.9330 | 0.975 | 0.9535 | 0.4 |
| No log | 1.76 | 750 | 0.0870 | 0.7854 | 0.8564 | 0.8193 | 0.3000 |
| No log | 1.76 | 750 | 0.0280 | 0.8486 | 0.925 | 0.8852 | 0.3000 |
| No log | 1.76 | 750 | 0.0276 | 0.9477 | 0.815 | 0.8763 | 0.6 |
| No log | 1.76 | 750 | 0.0259 | 0.8889 | 0.92 | 0.9042 | 0.3000 |
| No log | 1.76 | 750 | 0.0252 | 0.8767 | 0.96 | 0.9165 | 0.4 |
| No log | 1.76 | 750 | 0.0236 | 0.9301 | 0.865 | 0.8964 | 0.5 |
| No log | 1.76 | 750 | 0.0321 | 0.875 | 0.84 | 0.8571 | 0.4 |
| No log | 1.76 | 750 | 0.0192 | 0.8325 | 0.845 | 0.8387 | 0.5 |
| No log | 1.76 | 750 | 0.0392 | 0.8531 | 0.755 | 0.8011 | 0.6 |
| No log | 1.76 | 750 | 0.0475 | 0.8208 | 0.87 | 0.8447 | 0.3000 |
| No log | 1.76 | 750 | 0.0024 | 0.9950 | 1.0 | 0.9975 | 0.054 |
| No log | 1.76 | 750 | 0.0321 | 0.8152 | 0.86 | 0.8370 | 0.5 |
| No log | 1.76 | 750 | 0.0257 | 0.8082 | 0.885 | 0.8449 | 0.3000 |
| No log | 1.76 | 750 | 0.0267 | 0.8325 | 0.8283 | 0.8304 | 0.4 |
| No log | 1.76 | 750 | 0.0650 | 0.6822 | 0.8090 | 0.7402 | 0.3000 |
| No log | 1.76 | 750 | 0.0239 | 0.8624 | 0.815 | 0.8380 | 0.4 |
| No log | 1.76 | 750 | 0.0189 | 0.8558 | 0.92 | 0.8867 | 0.2 |
| No log | 1.76 | 750 | 0.0062 | 0.9552 | 0.96 | 0.9576 | 0.4 |
| No log | 1.76 | 750 | 0.0308 | 0.7763 | 0.85 | 0.8115 | 0.5 |
| No log | 1.76 | 750 | 0.0308 | 0.7991 | 0.895 | 0.8443 | 0.2 |
| No log | 1.76 | 750 | 0.0294 | 0.8894 | 0.885 | 0.8872 | 0.5 |
| No log | 1.76 | 750 | 0.0243 | 0.9078 | 0.64 | 0.7507 | 0.7000 |
| No log | 1.76 | 750 | 0.0271 | 0.8447 | 0.87 | 0.8571 | 0.4 |
| No log | 1.76 | 750 | 0.0273 | 0.9381 | 0.91 | 0.9239 | 0.2 |
| No log | 1.76 | 750 | 0.0632 | 0.7083 | 0.765 | 0.7356 | 0.3000 |
| No log | 1.76 | 750 | 0.0107 | 0.9802 | 0.99 | 0.9851 | 0.2 |
| No log | 1.76 | 750 | 0.0008 | 0.9901 | 1.0 | 0.9950 | 0.046 |
| No log | 1.76 | 750 | 0.0153 | 0.96 | 0.96 | 0.96 | 0.3000 |
| No log | 1.76 | 750 | 0.0437 | 0.7558 | 0.82 | 0.7866 | 0.2 |
| No log | 1.76 | 750 | 0.0435 | 0.7477 | 0.83 | 0.7867 | 0.4 |
| No log | 1.76 | 750 | 0.1208 | 0.5965 | 0.7234 | 0.6538 | 0.3000 |
| No log | 1.76 | 750 | 0.0332 | 0.8411 | 0.635 | 0.7236 | 0.5 |
| No log | 1.76 | 750 | 0.0122 | 0.9394 | 0.93 | 0.9347 | 0.5 |
| No log | 1.76 | 750 | 0.0245 | 0.8744 | 0.94 | 0.9060 | 0.3000 |
| No log | 1.76 | 750 | 0.0043 | 0.9949 | 0.98 | 0.9874 | 0.7000 |
| No log | 1.76 | 750 | 0.0251 | 0.8934 | 0.88 | 0.8866 | 0.4 |
| No log | 1.76 | 750 | 0.0317 | 0.6609 | 0.77 | 0.7113 | 0.4 |
| No log | 1.76 | 750 | 0.0646 | 0.73 | 0.7487 | 0.7392 | 0.4 |
| No log | 1.76 | 750 | 0.0195 | 0.9293 | 0.92 | 0.9246 | 0.4 |
| No log | 1.76 | 750 | 0.0199 | 0.8769 | 0.855 | 0.8658 | 0.6 |
| No log | 1.76 | 750 | 0.0065 | 0.9833 | 0.9833 | 0.9833 | 0.3000 |
| No log | 1.76 | 750 | 0.0117 | 0.9436 | 0.92 | 0.9316 | 0.6 |
| No log | 1.76 | 750 | 0.0315 | 0.9062 | 0.87 | 0.8878 | 0.4 |
| No log | 1.76 | 750 | 0.0063 | 0.9569 | 0.925 | 0.9407 | 0.5 |
| No log | 1.76 | 750 | 0.0160 | 0.9154 | 0.92 | 0.9177 | 0.4 |
| No log | 1.76 | 750 | 0.0672 | 0.8438 | 0.81 | 0.8265 | 0.3000 |
| No log | 1.76 | 750 | 0.0361 | 0.7914 | 0.7551 | 0.7728 | 0.5 |
| No log | 1.76 | 750 | 0.0036 | 0.9804 | 1.0 | 0.9901 | 0.3000 |
| No log | 1.76 | 750 | 0.0739 | 0.75 | 0.78 | 0.7647 | 0.4 |
| No log | 1.76 | 750 | 0.0345 | 0.4492 | 0.575 | 0.5044 | 0.2 |
| No log | 1.76 | 750 | 0.0241 | 0.8844 | 0.88 | 0.8822 | 0.2 |
| No log | 1.76 | 750 | 0.1105 | 0.6986 | 0.765 | 0.7303 | 0.4 |
| No log | 1.76 | 750 | 0.0745 | 0.6509 | 0.69 | 0.6699 | 0.6 |
| No log | 1.76 | 750 | 0.0700 | 0.6098 | 0.805 | 0.6940 | 0.077 |
| No log | 1.76 | 750 | 0.1006 | 0.7184 | 0.88 | 0.7910 | 0.084 |
| No log | 1.76 | 750 | 0.0416 | 0.7262 | 0.915 | 0.8097 | 0.3000 |
| No log | 1.76 | 750 | 0.0382 | 0.7617 | 0.895 | 0.8230 | 0.3000 |
| No log | 1.76 | 750 | 0.0455 | 0.4688 | 0.6030 | 0.5275 | 0.3000 |
| No log | 1.76 | 750 | 0.0526 | 0.7442 | 0.8 | 0.7711 | 0.3000 |
| No log | 1.76 | 750 | 0.0478 | 0.7049 | 0.7818 | 0.7414 | 0.4 |
| No log | 1.76 | 750 | 0.0412 | 0.8431 | 0.86 | 0.8515 | 0.3000 |
| No log | 1.76 | 750 | 0.0412 | 0.8431 | 0.86 | 0.8515 | 0.3000 |
| No log | 1.76 | 750 | 0.0387 | 0.7963 | 0.86 | 0.8269 | 0.3000 |
| No log | 1.76 | 750 | 0.0522 | 0.8204 | 0.845 | 0.8325 | 0.3000 |
| No log | 1.76 | 750 | 0.0335 | 0.7686 | 0.88 | 0.8205 | 0.2 |
| No log | 1.76 | 750 | 0.0387 | 0.8394 | 0.81 | 0.8244 | 0.5 |
| No log | 1.76 | 750 | 0.0848 | 0.7268 | 0.745 | 0.7358 | 0.3000 |
| No log | 1.76 | 750 | 0.0451 | 0.7119 | 0.84 | 0.7706 | 0.3000 |
| No log | 1.76 | 750 | 0.0430 | 0.8008 | 0.945 | 0.8670 | 0.2 |
| No log | 1.76 | 750 | 0.1563 | 0.6537 | 0.67 | 0.6617 | 0.2 |
| No log | 1.76 | 750 | 0.0510 | 0.8187 | 0.745 | 0.7801 | 0.5 |
| No log | 1.76 | 750 | 0.1078 | 0.6967 | 0.8543 | 0.7675 | 0.081 |
| No log | 1.76 | 750 | 0.0362 | 0.8333 | 0.875 | 0.8537 | 0.3000 |
| No log | 1.76 | 750 | 0.0362 | 0.8333 | 0.875 | 0.8537 | 0.3000 |
| No log | 1.76 | 750 | 0.0266 | 0.7115 | 0.8043 | 0.7551 | 0.4 |
| No log | 1.76 | 750 | 0.0266 | 0.7115 | 0.8043 | 0.7551 | 0.4 |
| No log | 1.76 | 750 | 0.0368 | 0.8602 | 0.8 | 0.8290 | 0.4 |
| No log | 1.76 | 750 | 0.0419 | 0.6159 | 0.465 | 0.5299 | 0.6 |
| No log | 1.76 | 750 | 0.0575 | 0.44 | 0.8462 | 0.5789 | 0.092 |
| No log | 1.76 | 750 | 0.0347 | 0.75 | 0.795 | 0.7718 | 0.5 |
| No log | 1.76 | 750 | 0.0350 | 0.5811 | 0.77 | 0.6624 | 0.3000 |
| No log | 1.76 | 750 | 0.0516 | 0.7087 | 0.815 | 0.7581 | 0.2 |
| No log | 1.76 | 750 | 0.0381 | 0.8020 | 0.79 | 0.7960 | 0.6 |
| No log | 1.76 | 750 | 0.0581 | 0.7189 | 0.895 | 0.7973 | 0.2 |
| No log | 1.76 | 750 | 0.0994 | 0.4487 | 0.3365 | 0.3846 | 0.3000 |
| No log | 1.76 | 750 | 0.0792 | 0.7078 | 0.86 | 0.7765 | 0.2 |
| No log | 1.76 | 750 | 0.0518 | 0.7604 | 0.825 | 0.7914 | 0.5 |
| No log | 1.76 | 750 | 0.0853 | 0.8021 | 0.75 | 0.7752 | 0.8 |
| No log | 1.76 | 750 | 0.1053 | 0.6865 | 0.865 | 0.7655 | 0.4 |
| No log | 1.76 | 750 | 0.0675 | 0.7040 | 0.785 | 0.7423 | 0.4 |
| No log | 1.76 | 750 | 0.1260 | 0.7845 | 0.91 | 0.8426 | 0.091 |
| No log | 1.76 | 750 | 0.1234 | 0.4711 | 0.57 | 0.5158 | 0.035 |
| No log | 1.76 | 750 | 0.0631 | 0.5297 | 0.58 | 0.5537 | 0.4 |
| No log | 1.76 | 750 | 0.0702 | 0.7901 | 0.96 | 0.8668 | 0.8 |
| No log | 1.76 | 750 | 0.0452 | 0.4925 | 0.66 | 0.5641 | 0.058 |
| No log | 1.76 | 750 | 0.0561 | 0.8009 | 0.865 | 0.8317 | 0.3000 |
| No log | 1.76 | 750 | 0.0616 | 0.6471 | 0.9167 | 0.7586 | 0.3000 |
| No log | 1.76 | 750 | 0.0469 | 0.7305 | 0.935 | 0.8202 | 0.2 |
| No log | 1.76 | 750 | 0.0403 | 0.8520 | 0.835 | 0.8434 | 0.4 |
| No log | 1.76 | 750 | 0.0628 | 0.6581 | 0.77 | 0.7097 | 0.2 |
| No log | 1.76 | 750 | 0.0482 | 0.8 | 0.8 | 0.8000 | 0.4 |
| No log | 1.76 | 750 | 0.0491 | 0.5471 | 0.61 | 0.5768 | 0.5 |
| No log | 1.76 | 750 | 0.0275 | 0.8832 | 0.945 | 0.9130 | 0.2 |
| No log | 1.76 | 750 | 0.0909 | 0.4534 | 0.535 | 0.4908 | 0.5 |
| No log | 1.76 | 750 | 0.0480 | 0.7723 | 0.8693 | 0.8180 | 0.3000 |
| No log | 1.76 | 750 | 0.1040 | 0.6024 | 0.4854 | 0.5376 | 0.3000 |
| No log | 1.76 | 750 | 0.0661 | 0.5290 | 0.73 | 0.6134 | 0.0530 |
| No log | 1.76 | 750 | 0.0369 | 0.8333 | 0.9 | 0.8654 | 0.4 |
| No log | 1.76 | 750 | 0.0623 | 0.8131 | 0.87 | 0.8406 | 0.2 |
| No log | 1.76 | 750 | 0.0623 | 0.8131 | 0.87 | 0.8406 | 0.2 |
| No log | 1.76 | 750 | 0.0432 | 0.6074 | 0.735 | 0.6652 | 0.2 |
| No log | 1.76 | 750 | 0.0624 | 0.8097 | 0.9242 | 0.8632 | 0.3000 |
| No log | 1.76 | 750 | 0.0444 | 0.6584 | 0.8040 | 0.7240 | 0.3000 |
| No log | 1.76 | 750 | 0.0525 | 0.7811 | 0.785 | 0.7830 | 0.5 |
| No log | 1.76 | 750 | 0.0460 | 0.8079 | 0.82 | 0.8139 | 0.3000 |
| No log | 1.76 | 750 | 0.0413 | 0.7742 | 0.84 | 0.8058 | 0.3000 |
| No log | 1.76 | 750 | 0.0480 | 0.6759 | 0.855 | 0.7550 | 0.5 |
| No log | 1.76 | 750 | 0.0482 | 0.7306 | 0.895 | 0.8045 | 0.2 |
| No log | 1.76 | 750 | 0.0406 | 0.8271 | 0.885 | 0.8551 | 0.3000 |
| No log | 1.76 | 750 | 0.0474 | 0.7692 | 0.75 | 0.7595 | 0.5 |
| No log | 1.76 | 750 | 0.0317 | 0.8989 | 0.8 | 0.8466 | 0.5 |
| No log | 1.76 | 750 | 0.0639 | 0.7729 | 0.8 | 0.7862 | 0.3000 |
| No log | 1.76 | 750 | 0.0465 | 0.4549 | 0.655 | 0.5369 | 0.2 |
| No log | 1.76 | 750 | 0.0562 | 0.5804 | 0.65 | 0.6132 | 0.6 |
| No log | 1.76 | 750 | 0.0519 | 0.6873 | 0.89 | 0.7756 | 0.066 |
| No log | 1.76 | 750 | 0.0605 | 0.6062 | 0.785 | 0.6841 | 0.6 |
| No log | 1.76 | 750 | 0.0591 | 0.7692 | 0.8 | 0.7843 | 0.5 |
| No log | 1.76 | 750 | 0.0371 | 0.7723 | 0.865 | 0.8160 | 0.3000 |
| No log | 1.76 | 750 | 0.0988 | 0.5036 | 0.705 | 0.5875 | 0.3000 |
| No log | 1.76 | 750 | 0.0685 | 0.7751 | 0.81 | 0.7922 | 0.4 |
| No log | 1.76 | 750 | 0.0479 | 0.6842 | 0.715 | 0.6993 | 0.5 |
| No log | 1.76 | 750 | 0.0479 | 0.6842 | 0.715 | 0.6993 | 0.5 |
| No log | 1.76 | 750 | 0.0479 | 0.6842 | 0.715 | 0.6993 | 0.5 |
| No log | 1.76 | 750 | 0.0479 | 0.6842 | 0.715 | 0.6993 | 0.5 |
| No log | 1.76 | 750 | 0.1159 | 0.5061 | 0.6313 | 0.5618 | 0.089 |
| No log | 1.76 | 750 | 0.0546 | 0.7113 | 0.8586 | 0.7780 | 0.4 |
| No log | 1.76 | 750 | 0.0185 | 0.9242 | 0.975 | 0.9489 | 0.2 |
| No log | 1.76 | 750 | 0.0019 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 1.76 | 750 | 0.0026 | 1.0 | 0.995 | 0.9975 | 0.3000 |
| No log | 1.76 | 750 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.0870 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.0880 |
| No log | 1.76 | 750 | 0.0003 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 1.76 | 750 | 0.0006 | 0.9950 | 1.0 | 0.9975 | 0.0370 |
| No log | 1.76 | 750 | 0.0024 | 0.9900 | 0.995 | 0.9925 | 0.7000 |
| No log | 1.76 | 750 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 1.76 | 750 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 1.76 | 750 | 0.0184 | 0.9890 | 0.9 | 0.9424 | 0.3000 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.0510 |
| No log | 1.76 | 750 | 0.0316 | 0.9011 | 0.82 | 0.8586 | 0.2 |
| No log | 1.76 | 750 | 0.0017 | 0.9901 | 1.0 | 0.9950 | 0.046 |
| No log | 1.76 | 750 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 1.76 | 750 | 0.0030 | 0.995 | 0.995 | 0.995 | 0.9 |
| No log | 1.76 | 750 | 0.0051 | 0.9703 | 0.98 | 0.9751 | 0.6 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.064 |
| No log | 1.76 | 750 | 0.0020 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 1.76 | 750 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 1.76 | 750 | 0.0134 | 0.9579 | 0.91 | 0.9333 | 0.2 |
| No log | 1.76 | 750 | 0.1140 | 0.3783 | 0.575 | 0.4563 | 0.6 |
| No log | 1.76 | 750 | 0.0968 | 0.3534 | 0.3241 | 0.3381 | 0.4 |
| No log | 1.76 | 750 | 0.1203 | 0.6667 | 0.69 | 0.6781 | 0.3000 |
| No log | 1.76 | 750 | 0.1112 | 0.5761 | 0.795 | 0.6681 | 0.3000 |
| No log | 2.34 | 1000 | 0.0411 | 0.9730 | 0.9 | 0.9351 | 0.5 |
| No log | 2.34 | 1000 | 0.0119 | 0.8942 | 0.93 | 0.9118 | 0.3000 |
| No log | 2.34 | 1000 | 0.0340 | 0.8872 | 0.865 | 0.8759 | 0.5 |
| No log | 2.34 | 1000 | 0.0162 | 0.8722 | 0.99 | 0.9274 | 0.056 |
| No log | 2.34 | 1000 | 0.0391 | 0.9479 | 0.91 | 0.9286 | 0.4 |
| No log | 2.34 | 1000 | 0.0095 | 0.9802 | 0.99 | 0.9851 | 0.7000 |
| No log | 2.34 | 1000 | 0.0145 | 0.9187 | 0.9648 | 0.9412 | 0.4 |
| No log | 2.34 | 1000 | 0.0110 | 0.9752 | 0.985 | 0.9801 | 0.3000 |
| No log | 2.34 | 1000 | 0.0118 | 0.9431 | 0.995 | 0.9684 | 0.4 |
| No log | 2.34 | 1000 | 0.0313 | 0.9568 | 0.885 | 0.9195 | 0.6 |
| No log | 2.34 | 1000 | 0.0078 | 0.9615 | 1.0 | 0.9804 | 0.3000 |
| No log | 2.34 | 1000 | 0.0135 | 0.9369 | 0.965 | 0.9507 | 0.5 |
| No log | 2.34 | 1000 | 0.0071 | 0.9569 | 1.0 | 0.9780 | 0.2 |
| No log | 2.34 | 1000 | 0.0182 | 0.9522 | 0.995 | 0.9731 | 0.7000 |
| No log | 2.34 | 1000 | 0.0142 | 0.9259 | 1.0 | 0.9615 | 0.0220 |
| No log | 2.34 | 1000 | 0.0110 | 0.9259 | 1.0 | 0.9615 | 0.068 |
| No log | 2.34 | 1000 | 0.0098 | 0.9747 | 0.9797 | 0.9772 | 0.9 |
| No log | 2.34 | 1000 | 0.0112 | 0.9648 | 0.96 | 0.9624 | 0.7000 |
| No log | 2.34 | 1000 | 0.0462 | 0.8472 | 0.9196 | 0.8819 | 0.5 |
| No log | 2.34 | 1000 | 0.0169 | 0.9259 | 1.0 | 0.9615 | 0.0190 |
| No log | 2.34 | 1000 | 0.0121 | 0.9299 | 0.995 | 0.9614 | 0.028 |
| No log | 2.34 | 1000 | 0.0485 | 0.9502 | 0.955 | 0.9526 | 0.066 |
| No log | 2.34 | 1000 | 0.0048 | 1.0 | 0.94 | 0.9691 | 0.7000 |
| No log | 2.34 | 1000 | 0.0061 | 0.9897 | 0.9797 | 0.9847 | 0.2 |
| No log | 2.34 | 1000 | 0.0187 | 0.9474 | 0.99 | 0.9682 | 0.2 |
| No log | 2.34 | 1000 | 0.0500 | 0.9444 | 0.85 | 0.8947 | 0.6 |
| No log | 2.34 | 1000 | 0.0070 | 0.9275 | 0.9648 | 0.9458 | 0.3000 |
| No log | 2.34 | 1000 | 0.0221 | 0.9151 | 0.97 | 0.9417 | 0.4 |
| No log | 2.34 | 1000 | 0.0163 | 0.9479 | 0.91 | 0.9286 | 0.6 |
| No log | 2.34 | 1000 | 0.0152 | 0.9522 | 0.995 | 0.9731 | 0.2 |
| No log | 2.34 | 1000 | 0.0317 | 0.9502 | 0.955 | 0.9526 | 0.5 |
| No log | 2.34 | 1000 | 0.0258 | 0.9469 | 0.98 | 0.9631 | 0.5 |
| No log | 2.34 | 1000 | 0.0158 | 0.9245 | 0.98 | 0.9515 | 0.083 |
| No log | 2.34 | 1000 | 0.0116 | 0.9662 | 1.0 | 0.9828 | 0.2 |
| No log | 2.34 | 1000 | 0.0111 | 0.9563 | 0.985 | 0.9704 | 0.2 |
| No log | 2.34 | 1000 | 0.0768 | 0.9101 | 0.81 | 0.8571 | 0.4 |
| No log | 2.34 | 1000 | 0.0099 | 0.9378 | 0.98 | 0.9584 | 0.2 |
| No log | 2.34 | 1000 | 0.0137 | 0.9851 | 0.99 | 0.9875 | 0.3000 |
| No log | 2.34 | 1000 | 0.1184 | 0.6931 | 0.7035 | 0.6983 | 0.061 |
| No log | 2.34 | 1000 | 0.0410 | 0.9310 | 0.9497 | 0.9403 | 0.0860 |
| No log | 2.34 | 1000 | 0.0510 | 0.8311 | 0.935 | 0.88 | 0.4 |
| No log | 2.34 | 1000 | 0.0120 | 0.9466 | 0.975 | 0.9606 | 0.067 |
| No log | 2.34 | 1000 | 0.0108 | 0.9803 | 0.995 | 0.9876 | 0.5 |
| No log | 2.34 | 1000 | 0.0056 | 0.9896 | 0.9598 | 0.9745 | 0.7000 |
| No log | 2.34 | 1000 | 0.0066 | 0.9604 | 0.97 | 0.9652 | 0.6 |
| No log | 2.34 | 1000 | 0.0039 | 0.97 | 0.97 | 0.97 | 0.8 |
| No log | 2.34 | 1000 | 0.0096 | 0.9569 | 1.0 | 0.9780 | 0.4 |
| No log | 2.34 | 1000 | 0.0322 | 0.8837 | 0.95 | 0.9157 | 0.4 |
| No log | 2.34 | 1000 | 0.0119 | 0.9852 | 1.0 | 0.9926 | 0.3000 |
| No log | 2.34 | 1000 | 0.0249 | 0.9655 | 0.98 | 0.9727 | 0.8 |
| No log | 2.34 | 1000 | 0.0150 | 0.975 | 0.975 | 0.975 | 0.8 |
| No log | 2.34 | 1000 | 0.0043 | 0.9455 | 0.9695 | 0.9574 | 0.7000 |
| No log | 2.34 | 1000 | 0.1138 | 0.6936 | 0.815 | 0.7494 | 0.054 |
| No log | 2.34 | 1000 | 0.0629 | 0.9048 | 0.855 | 0.8792 | 0.5 |
| No log | 2.34 | 1000 | 0.0107 | 0.9660 | 0.995 | 0.9803 | 0.5 |
| No log | 2.34 | 1000 | 0.0162 | 0.9524 | 1.0 | 0.9756 | 0.046 |
| No log | 2.34 | 1000 | 0.1027 | 0.9425 | 0.82 | 0.8770 | 0.4 |
| No log | 2.34 | 1000 | 0.0066 | 0.9756 | 1.0 | 0.9877 | 0.3000 |
| No log | 2.34 | 1000 | 0.1150 | 0.8763 | 0.85 | 0.8629 | 0.4 |
| No log | 2.34 | 1000 | 0.0108 | 0.9479 | 1.0 | 0.9732 | 0.5 |
| No log | 2.34 | 1000 | 0.0094 | 0.9660 | 0.995 | 0.9803 | 0.2 |
| No log | 2.34 | 1000 | 0.0090 | 0.9458 | 0.96 | 0.9529 | 0.9 |
| No log | 2.34 | 1000 | 0.0483 | 0.8733 | 0.965 | 0.9169 | 0.2 |
| No log | 2.34 | 1000 | 0.0070 | 0.9660 | 0.995 | 0.9803 | 0.3000 |
| No log | 2.34 | 1000 | 0.0121 | 0.9519 | 0.99 | 0.9706 | 0.8 |
| No log | 2.34 | 1000 | 0.0088 | 0.9431 | 0.995 | 0.9684 | 0.7000 |
| No log | 2.34 | 1000 | 0.0148 | 0.9567 | 0.995 | 0.9755 | 0.078 |
| No log | 2.34 | 1000 | 0.0081 | 0.9662 | 1.0 | 0.9828 | 0.4 |
| No log | 2.34 | 1000 | 0.0311 | 0.9072 | 0.88 | 0.8934 | 0.3000 |
| No log | 2.34 | 1000 | 0.0560 | 0.8664 | 0.94 | 0.9017 | 0.2 |
| No log | 2.34 | 1000 | 0.0094 | 0.9372 | 0.97 | 0.9533 | 0.2 |
| No log | 2.34 | 1000 | 0.0617 | 0.8615 | 0.84 | 0.8506 | 0.6 |
| No log | 2.34 | 1000 | 0.0104 | 0.9567 | 0.995 | 0.9755 | 0.3000 |
| No log | 2.34 | 1000 | 0.0153 | 0.9515 | 0.98 | 0.9655 | 0.6 |
| No log | 2.34 | 1000 | 0.0151 | 0.8676 | 0.95 | 0.9069 | 0.4 |
| No log | 2.34 | 1000 | 0.0081 | 0.9634 | 0.92 | 0.9412 | 0.4 |
| No log | 2.34 | 1000 | 0.0181 | 0.9519 | 0.99 | 0.9706 | 0.5 |
| No log | 2.34 | 1000 | 0.0139 | 0.9444 | 0.935 | 0.9397 | 0.3000 |
| No log | 2.34 | 1000 | 0.0571 | 0.9476 | 0.905 | 0.9258 | 0.2 |
| No log | 2.34 | 1000 | 0.0238 | 0.9198 | 0.86 | 0.8889 | 0.7000 |
| No log | 2.34 | 1000 | 0.0815 | 0.6917 | 0.8838 | 0.7761 | 0.003 |
| No log | 2.34 | 1000 | 0.0260 | 0.9554 | 0.965 | 0.9602 | 0.2 |
| No log | 2.34 | 1000 | 0.1174 | 0.7981 | 0.83 | 0.8137 | 0.3000 |
| No log | 2.34 | 1000 | 0.0195 | 0.9270 | 0.825 | 0.8730 | 0.7000 |
| No log | 2.34 | 1000 | 0.0673 | 0.7583 | 0.8 | 0.7786 | 0.4 |
| No log | 2.34 | 1000 | 0.0230 | 0.9072 | 0.88 | 0.8934 | 0.4 |
| No log | 2.34 | 1000 | 0.0781 | 0.8477 | 0.835 | 0.8413 | 0.4 |
| No log | 2.34 | 1000 | 0.0909 | 0.7981 | 0.85 | 0.8232 | 0.4 |
| No log | 2.34 | 1000 | 0.0610 | 0.6566 | 0.5477 | 0.5973 | 0.5 |
| No log | 2.34 | 1000 | 0.0520 | 0.8408 | 0.845 | 0.8429 | 0.5 |
| No log | 2.34 | 1000 | 0.0459 | 0.8621 | 0.875 | 0.8685 | 0.5 |
| No log | 2.34 | 1000 | 0.0935 | 0.8081 | 0.8 | 0.8040 | 0.4 |
| No log | 2.34 | 1000 | 0.0434 | 0.8303 | 0.905 | 0.8660 | 0.4 |
| No log | 2.34 | 1000 | 0.0249 | 0.89 | 0.89 | 0.89 | 0.5 |
| No log | 2.34 | 1000 | 0.0683 | 0.7814 | 0.84 | 0.8096 | 0.4 |
| No log | 2.34 | 1000 | 0.0629 | 0.8894 | 0.885 | 0.8872 | 0.6 |
| No log | 2.34 | 1000 | 0.0558 | 0.8841 | 0.915 | 0.8993 | 0.5 |
| No log | 2.34 | 1000 | 0.0471 | 0.8429 | 0.805 | 0.8235 | 0.6 |
| No log | 2.34 | 1000 | 0.0343 | 0.8770 | 0.8241 | 0.8497 | 0.6 |
| No log | 2.34 | 1000 | 0.0623 | 0.7232 | 0.81 | 0.7642 | 0.4 |
| No log | 2.34 | 1000 | 0.0754 | 0.7477 | 0.8 | 0.7729 | 0.5 |
| No log | 2.34 | 1000 | 0.0556 | 0.8311 | 0.91 | 0.8687 | 0.3000 |
| No log | 2.34 | 1000 | 0.0426 | 0.8317 | 0.865 | 0.8480 | 0.3000 |
| No log | 2.34 | 1000 | 0.1347 | 0.8579 | 0.845 | 0.8514 | 0.2 |
| No log | 2.34 | 1000 | 0.0259 | 0.8057 | 0.85 | 0.8273 | 0.4 |
| No log | 2.34 | 1000 | 0.0352 | 0.8969 | 0.8832 | 0.8900 | 0.2 |
| No log | 2.34 | 1000 | 0.0676 | 0.8634 | 0.885 | 0.8741 | 0.5 |
| No log | 2.34 | 1000 | 0.0774 | 0.7477 | 0.8040 | 0.7748 | 0.3000 |
| No log | 2.34 | 1000 | 0.0234 | 0.7478 | 0.86 | 0.8000 | 0.2 |
| No log | 2.34 | 1000 | 0.0841 | 0.7266 | 0.93 | 0.8158 | 0.3000 |
| No log | 2.34 | 1000 | 0.0331 | 0.8177 | 0.83 | 0.8238 | 0.4 |
| No log | 2.34 | 1000 | 0.0620 | 0.835 | 0.835 | 0.835 | 0.5 |
| No log | 2.34 | 1000 | 0.0700 | 0.8830 | 0.83 | 0.8557 | 0.6 |
| No log | 2.34 | 1000 | 0.1109 | 0.7773 | 0.82 | 0.7981 | 0.5 |
| No log | 2.34 | 1000 | 0.0744 | 0.7131 | 0.895 | 0.7938 | 0.097 |
| No log | 2.34 | 1000 | 0.0612 | 0.8137 | 0.83 | 0.8218 | 0.5 |
| No log | 2.34 | 1000 | 0.0507 | 0.8018 | 0.8945 | 0.8456 | 0.4 |
| No log | 2.34 | 1000 | 0.1478 | 0.6885 | 0.84 | 0.7568 | 0.098 |
| No log | 2.34 | 1000 | 0.0761 | 0.5574 | 0.51 | 0.5326 | 0.4 |
| No log | 2.34 | 1000 | 0.0926 | 0.9274 | 0.83 | 0.8760 | 0.5 |
| No log | 2.34 | 1000 | 0.2438 | 0.3158 | 0.7839 | 0.4502 | 0.001 |
| No log | 2.34 | 1000 | 0.0760 | 0.8944 | 0.8090 | 0.8496 | 0.4 |
| No log | 2.34 | 1000 | 0.0865 | 0.6900 | 0.935 | 0.7941 | 0.2 |
| No log | 2.34 | 1000 | 0.0248 | 0.9275 | 0.9040 | 0.9156 | 0.2 |
| No log | 2.34 | 1000 | 0.0621 | 0.8832 | 0.945 | 0.9130 | 0.3000 |
| No log | 2.34 | 1000 | 0.0270 | 0.8585 | 0.8844 | 0.8713 | 0.3000 |
| No log | 2.34 | 1000 | 0.0352 | 0.8743 | 0.765 | 0.8160 | 0.7000 |
| No log | 2.34 | 1000 | 0.0413 | 0.7622 | 0.705 | 0.7325 | 0.5 |
| No log | 2.34 | 1000 | 0.0624 | 0.8408 | 0.845 | 0.8429 | 0.6 |
| No log | 2.34 | 1000 | 0.0608 | 0.6412 | 0.84 | 0.7273 | 0.3000 |
| No log | 2.34 | 1000 | 0.0594 | 0.8981 | 0.97 | 0.9327 | 0.2 |
| No log | 2.34 | 1000 | 0.0529 | 0.8488 | 0.87 | 0.8593 | 0.5 |
| No log | 2.34 | 1000 | 0.1257 | 0.7017 | 0.835 | 0.7626 | 0.4 |
| No log | 2.34 | 1000 | 0.0196 | 0.8820 | 0.7136 | 0.7889 | 0.8 |
| No log | 2.34 | 1000 | 0.1820 | 0.5320 | 0.7940 | 0.6371 | 0.017 |
| No log | 2.34 | 1000 | 0.0939 | 0.8763 | 0.85 | 0.8629 | 0.5 |
| No log | 2.34 | 1000 | 0.0514 | 0.8684 | 0.825 | 0.8462 | 0.7000 |
| No log | 2.34 | 1000 | 0.0613 | 0.8738 | 0.9 | 0.8867 | 0.5 |
| No log | 2.34 | 1000 | 0.0986 | 0.8729 | 0.79 | 0.8294 | 0.3000 |
| No log | 2.34 | 1000 | 0.0565 | 0.7939 | 0.905 | 0.8458 | 0.3000 |
| No log | 2.34 | 1000 | 0.1316 | 0.7121 | 0.94 | 0.8103 | 0.083 |
| No log | 2.34 | 1000 | 0.0383 | 0.7991 | 0.935 | 0.8618 | 0.4 |
| No log | 2.34 | 1000 | 0.0592 | 0.8763 | 0.85 | 0.8629 | 0.5 |
| No log | 2.34 | 1000 | 0.0166 | 0.8440 | 0.92 | 0.8804 | 0.4 |
| No log | 2.34 | 1000 | 0.0976 | 0.7258 | 0.9 | 0.8036 | 0.3000 |
| No log | 2.34 | 1000 | 0.0367 | 0.8808 | 0.85 | 0.8651 | 0.7000 |
| No log | 2.34 | 1000 | 0.0562 | 0.8522 | 0.865 | 0.8586 | 0.5 |
| No log | 2.34 | 1000 | 0.0367 | 0.8967 | 0.825 | 0.8594 | 0.6 |
| No log | 2.34 | 1000 | 0.0897 | 0.7962 | 0.84 | 0.8175 | 0.4 |
| No log | 2.34 | 1000 | 0.0377 | 0.8995 | 0.895 | 0.8972 | 0.6 |
| No log | 2.34 | 1000 | 0.0714 | 0.7193 | 0.615 | 0.6631 | 0.3000 |
| No log | 2.34 | 1000 | 0.1205 | 0.7513 | 0.74 | 0.7456 | 0.4 |
| No log | 2.34 | 1000 | 0.0276 | 0.8643 | 0.86 | 0.8622 | 0.3000 |
| No log | 2.34 | 1000 | 0.0694 | 0.8359 | 0.815 | 0.8253 | 0.6 |
| No log | 2.34 | 1000 | 0.0895 | 0.8075 | 0.86 | 0.8329 | 0.4 |
| No log | 2.34 | 1000 | 0.0513 | 0.8325 | 0.87 | 0.8509 | 0.6 |
| No log | 2.34 | 1000 | 0.0238 | 0.8287 | 0.895 | 0.8606 | 0.4 |
| No log | 2.34 | 1000 | 0.0171 | 0.8261 | 0.95 | 0.8837 | 0.089 |
| No log | 2.34 | 1000 | 0.1072 | 0.7020 | 0.895 | 0.7868 | 0.3000 |
| No log | 2.34 | 1000 | 0.0608 | 0.612 | 0.765 | 0.6800 | 0.2 |
| No log | 2.34 | 1000 | 0.0859 | 0.8384 | 0.96 | 0.8951 | 0.031 |
| No log | 2.34 | 1000 | 0.1109 | 0.4394 | 0.78 | 0.5622 | 0.0360 |
| No log | 2.34 | 1000 | 0.0562 | 0.7668 | 0.8636 | 0.8124 | 0.082 |
| No log | 2.34 | 1000 | 0.0633 | 0.8634 | 0.885 | 0.8741 | 0.2 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.002 |
| No log | 2.34 | 1000 | 0.0096 | 0.8111 | 0.8934 | 0.8502 | 0.7000 |
| No log | 2.34 | 1000 | 0.0042 | 0.9206 | 0.985 | 0.9517 | 0.4 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.008 |
| No log | 2.34 | 1000 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.035 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.032 |
| No log | 2.34 | 1000 | 0.0023 | 0.9947 | 1.0 | 0.9973 | 0.021 |
| No log | 2.34 | 1000 | 0.0012 | 1.0 | 0.995 | 0.9975 | 0.4 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.34 | 1000 | 0.0033 | 0.9900 | 0.995 | 0.9925 | 0.9 |
| No log | 2.34 | 1000 | 0.0007 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 2.34 | 1000 | 0.0085 | 0.9897 | 0.965 | 0.9772 | 0.0730 |
| No log | 2.34 | 1000 | 0.0149 | 0.9946 | 0.925 | 0.9585 | 0.2 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0156 | 0.9643 | 0.945 | 0.9545 | 0.2 |
| No log | 2.34 | 1000 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.003 |
| No log | 2.34 | 1000 | 0.0030 | 0.9949 | 0.985 | 0.9899 | 0.3000 |
| No log | 2.34 | 1000 | 0.0010 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 2.34 | 1000 | 0.0031 | 0.98 | 0.98 | 0.98 | 0.8 |
| No log | 2.34 | 1000 | 0.0353 | 0.9274 | 0.83 | 0.8760 | 0.7000 |
| No log | 2.34 | 1000 | 0.0006 | 0.9950 | 1.0 | 0.9975 | 0.031 |
| No log | 2.34 | 1000 | 0.0012 | 0.9852 | 1.0 | 0.9926 | 0.032 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.011 |
| No log | 2.34 | 1000 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.7000 |
| No log | 2.34 | 1000 | 0.0012 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.089 |
| No log | 2.34 | 1000 | 0.0052 | 0.9289 | 0.98 | 0.9538 | 0.4 |
| No log | 2.34 | 1000 | 0.0019 | 0.9901 | 1.0 | 0.9950 | 0.045 |
| No log | 2.34 | 1000 | 0.0176 | 0.9845 | 0.955 | 0.9695 | 0.038 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.002 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 2.34 | 1000 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0015 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 2.34 | 1000 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 2.34 | 1000 | 0.0033 | 0.9792 | 1.0 | 0.9895 | 0.02 |
| No log | 2.34 | 1000 | 0.0040 | 0.9463 | 0.97 | 0.9580 | 0.2 |
| No log | 2.34 | 1000 | 0.0041 | 0.9804 | 1.0 | 0.9901 | 0.5 |
| No log | 2.34 | 1000 | 0.0011 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.34 | 1000 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.34 | 1000 | 0.0015 | 0.9900 | 0.995 | 0.9925 | 0.0860 |
| No log | 2.34 | 1000 | 0.0173 | 0.9108 | 0.9798 | 0.9440 | 0.4 |
| No log | 2.34 | 1000 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 2.34 | 1000 | 0.0022 | 0.99 | 0.99 | 0.99 | 0.7000 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0104 | 0.9524 | 1.0 | 0.9756 | 0.058 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 2.34 | 1000 | 0.0162 | 0.9381 | 0.91 | 0.9239 | 0.4 |
| No log | 2.34 | 1000 | 0.0026 | 0.9950 | 1.0 | 0.9975 | 0.003 |
| No log | 2.34 | 1000 | 0.0010 | 0.995 | 0.995 | 0.995 | 0.8 |
| No log | 2.34 | 1000 | 0.0010 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 2.34 | 1000 | 0.0008 | 0.9950 | 1.0 | 0.9975 | 0.4 |
| No log | 2.34 | 1000 | 0.0112 | 0.7934 | 0.96 | 0.8688 | 0.049 |
| No log | 2.34 | 1000 | 0.0083 | 0.9950 | 0.99 | 0.9925 | 0.002 |
| No log | 2.34 | 1000 | 0.0076 | 0.98 | 0.98 | 0.98 | 0.2 |
| No log | 2.34 | 1000 | 0.0169 | 0.7119 | 0.8528 | 0.7760 | 0.6 |
| No log | 2.34 | 1000 | 0.0068 | 0.9139 | 0.955 | 0.9340 | 0.3000 |
| No log | 2.34 | 1000 | 0.0378 | 0.8990 | 0.89 | 0.8945 | 0.5 |
| No log | 2.34 | 1000 | 0.1370 | 0.6301 | 0.8214 | 0.7132 | 0.05 |
| No log | 2.34 | 1000 | 0.0056 | 0.9594 | 0.945 | 0.9521 | 0.7000 |
| No log | 2.34 | 1000 | 0.0923 | 0.8030 | 0.8670 | 0.8338 | 0.3000 |
| No log | 2.34 | 1000 | 0.0283 | 0.8565 | 0.955 | 0.9031 | 0.2 |
| No log | 2.34 | 1000 | 0.0281 | 0.8838 | 0.875 | 0.8794 | 0.4 |
| No log | 2.34 | 1000 | 0.0279 | 0.8786 | 0.905 | 0.8916 | 0.3000 |
| No log | 2.34 | 1000 | 0.0259 | 0.8761 | 0.955 | 0.9139 | 0.4 |
| No log | 2.34 | 1000 | 0.0238 | 0.9355 | 0.87 | 0.9016 | 0.5 |
| No log | 2.34 | 1000 | 0.0329 | 0.8317 | 0.865 | 0.8480 | 0.3000 |
| No log | 2.34 | 1000 | 0.0187 | 0.8233 | 0.885 | 0.8530 | 0.4 |
| No log | 2.34 | 1000 | 0.0405 | 0.8483 | 0.755 | 0.7989 | 0.6 |
| No log | 2.34 | 1000 | 0.0496 | 0.8495 | 0.875 | 0.8621 | 0.3000 |
| No log | 2.34 | 1000 | 0.0021 | 0.9950 | 1.0 | 0.9975 | 0.081 |
| No log | 2.34 | 1000 | 0.0317 | 0.8852 | 0.81 | 0.8460 | 0.7000 |
| No log | 2.34 | 1000 | 0.0276 | 0.7973 | 0.885 | 0.8389 | 0.2 |
| No log | 2.34 | 1000 | 0.0277 | 0.8674 | 0.7929 | 0.8285 | 0.5 |
| No log | 2.34 | 1000 | 0.0630 | 0.6751 | 0.8040 | 0.7339 | 0.3000 |
| No log | 2.34 | 1000 | 0.0246 | 0.8213 | 0.85 | 0.8354 | 0.3000 |
| No log | 2.34 | 1000 | 0.0176 | 0.9086 | 0.895 | 0.9018 | 0.4 |
| No log | 2.34 | 1000 | 0.0065 | 0.9592 | 0.94 | 0.9495 | 0.5 |
| No log | 2.34 | 1000 | 0.0311 | 0.8342 | 0.805 | 0.8193 | 0.6 |
| No log | 2.34 | 1000 | 0.0336 | 0.7902 | 0.885 | 0.8349 | 0.2 |
| No log | 2.34 | 1000 | 0.0280 | 0.8861 | 0.895 | 0.8905 | 0.5 |
| No log | 2.34 | 1000 | 0.0248 | 0.7339 | 0.8 | 0.7656 | 0.4 |
| No log | 2.34 | 1000 | 0.0281 | 0.8488 | 0.87 | 0.8593 | 0.4 |
| No log | 2.34 | 1000 | 0.0300 | 0.9722 | 0.875 | 0.9211 | 0.4 |
| No log | 2.34 | 1000 | 0.0658 | 0.8 | 0.68 | 0.7351 | 0.5 |
| No log | 2.34 | 1000 | 0.0119 | 0.985 | 0.985 | 0.985 | 0.2 |
| No log | 2.34 | 1000 | 0.0011 | 0.9901 | 1.0 | 0.9950 | 0.0260 |
| No log | 2.34 | 1000 | 0.0141 | 0.965 | 0.965 | 0.965 | 0.4 |
| No log | 2.34 | 1000 | 0.0430 | 0.7990 | 0.795 | 0.7970 | 0.3000 |
| No log | 2.34 | 1000 | 0.0457 | 0.7432 | 0.825 | 0.7820 | 0.4 |
| No log | 2.34 | 1000 | 0.1199 | 0.6154 | 0.6809 | 0.6465 | 0.4 |
| No log | 2.34 | 1000 | 0.0323 | 0.7965 | 0.685 | 0.7366 | 0.4 |
| No log | 2.34 | 1000 | 0.0131 | 0.9397 | 0.935 | 0.9373 | 0.5 |
| No log | 2.34 | 1000 | 0.0237 | 0.9118 | 0.93 | 0.9208 | 0.4 |
| No log | 2.34 | 1000 | 0.0046 | 0.9851 | 0.99 | 0.9875 | 0.5 |
| No log | 2.34 | 1000 | 0.0277 | 0.8626 | 0.91 | 0.8856 | 0.2 |
| No log | 2.34 | 1000 | 0.0314 | 0.7188 | 0.69 | 0.7041 | 0.6 |
| No log | 2.34 | 1000 | 0.0664 | 0.7028 | 0.7641 | 0.7322 | 0.4 |
| No log | 2.34 | 1000 | 0.0202 | 0.9531 | 0.915 | 0.9337 | 0.5 |
| No log | 2.34 | 1000 | 0.0202 | 0.8517 | 0.89 | 0.8704 | 0.5 |
| No log | 2.34 | 1000 | 0.0034 | 0.9836 | 1.0 | 0.9917 | 0.3000 |
| No log | 2.34 | 1000 | 0.0117 | 0.9220 | 0.945 | 0.9333 | 0.4 |
| No log | 2.34 | 1000 | 0.0292 | 0.9223 | 0.89 | 0.9059 | 0.5 |
| No log | 2.34 | 1000 | 0.0056 | 0.95 | 0.95 | 0.9500 | 0.4 |
| No log | 2.34 | 1000 | 0.0170 | 0.9231 | 0.9 | 0.9114 | 0.4 |
| No log | 2.34 | 1000 | 0.0691 | 0.8534 | 0.815 | 0.8338 | 0.3000 |
| No log | 2.34 | 1000 | 0.0363 | 0.7358 | 0.7959 | 0.7647 | 0.4 |
| No log | 2.34 | 1000 | 0.0034 | 0.9852 | 1.0 | 0.9926 | 0.3000 |
| No log | 2.34 | 1000 | 0.0774 | 0.7477 | 0.8 | 0.7729 | 0.4 |
| No log | 2.34 | 1000 | 0.0358 | 0.6071 | 0.425 | 0.5 | 0.3000 |
| No log | 2.34 | 1000 | 0.0281 | 0.9239 | 0.85 | 0.8854 | 0.2 |
| No log | 2.34 | 1000 | 0.1175 | 0.7048 | 0.74 | 0.7220 | 0.5 |
| No log | 2.34 | 1000 | 0.0831 | 0.5694 | 0.82 | 0.6721 | 0.2 |
| No log | 2.34 | 1000 | 0.0631 | 0.6346 | 0.825 | 0.7174 | 0.092 |
| No log | 2.34 | 1000 | 0.1007 | 0.7137 | 0.91 | 0.8 | 0.054 |
| No log | 2.34 | 1000 | 0.0432 | 0.7309 | 0.91 | 0.8107 | 0.3000 |
| No log | 2.34 | 1000 | 0.0389 | 0.7900 | 0.865 | 0.8258 | 0.4 |
| No log | 2.34 | 1000 | 0.0453 | 0.5302 | 0.5729 | 0.5507 | 0.3000 |
| No log | 2.34 | 1000 | 0.0521 | 0.7969 | 0.765 | 0.7806 | 0.4 |
| No log | 2.34 | 1000 | 0.0532 | 0.6667 | 0.8364 | 0.7419 | 0.4 |
| No log | 2.34 | 1000 | 0.0410 | 0.8373 | 0.875 | 0.8557 | 0.3000 |
| No log | 2.34 | 1000 | 0.0410 | 0.8373 | 0.875 | 0.8557 | 0.3000 |
| No log | 2.34 | 1000 | 0.0397 | 0.7944 | 0.85 | 0.8213 | 0.3000 |
| No log | 2.34 | 1000 | 0.0509 | 0.7939 | 0.905 | 0.8458 | 0.2 |
| No log | 2.34 | 1000 | 0.0346 | 0.85 | 0.765 | 0.8053 | 0.4 |
| No log | 2.34 | 1000 | 0.0393 | 0.8241 | 0.82 | 0.8221 | 0.5 |
| No log | 2.34 | 1000 | 0.0865 | 0.6851 | 0.805 | 0.7402 | 0.2 |
| No log | 2.34 | 1000 | 0.0472 | 0.7453 | 0.79 | 0.7670 | 0.4 |
| No log | 2.34 | 1000 | 0.0429 | 0.8087 | 0.93 | 0.8651 | 0.2 |
| No log | 2.34 | 1000 | 0.1605 | 0.6634 | 0.68 | 0.6716 | 0.2 |
| No log | 2.34 | 1000 | 0.0520 | 0.7536 | 0.795 | 0.7737 | 0.4 |
| No log | 2.34 | 1000 | 0.1162 | 0.6987 | 0.8392 | 0.7626 | 0.0720 |
| No log | 2.34 | 1000 | 0.0347 | 0.8318 | 0.89 | 0.8599 | 0.3000 |
| No log | 2.34 | 1000 | 0.0347 | 0.8318 | 0.89 | 0.8599 | 0.3000 |
| No log | 2.34 | 1000 | 0.0278 | 0.7857 | 0.7174 | 0.75 | 0.5 |
| No log | 2.34 | 1000 | 0.0278 | 0.7857 | 0.7174 | 0.75 | 0.5 |
| No log | 2.34 | 1000 | 0.0384 | 0.8474 | 0.805 | 0.8256 | 0.4 |
| No log | 2.34 | 1000 | 0.0435 | 0.5181 | 0.5 | 0.5089 | 0.4 |
| No log | 2.34 | 1000 | 0.0522 | 0.5238 | 0.8462 | 0.6471 | 0.091 |
| No log | 2.34 | 1000 | 0.0333 | 0.8232 | 0.745 | 0.7822 | 0.6 |
| No log | 2.34 | 1000 | 0.0367 | 0.6017 | 0.71 | 0.6514 | 0.4 |
| No log | 2.34 | 1000 | 0.0530 | 0.6946 | 0.83 | 0.7563 | 0.2 |
| No log | 2.34 | 1000 | 0.0396 | 0.8343 | 0.755 | 0.7927 | 0.7000 |
| No log | 2.34 | 1000 | 0.0600 | 0.7348 | 0.845 | 0.7860 | 0.3000 |
| No log | 2.34 | 1000 | 0.1126 | 0.5 | 0.3173 | 0.3882 | 0.3000 |
| No log | 2.34 | 1000 | 0.0775 | 0.7523 | 0.82 | 0.7847 | 0.3000 |
| No log | 2.34 | 1000 | 0.0523 | 0.7703 | 0.855 | 0.8104 | 0.5 |
| No log | 2.34 | 1000 | 0.0972 | 0.7876 | 0.76 | 0.7735 | 0.8 |
| No log | 2.34 | 1000 | 0.1203 | 0.6971 | 0.84 | 0.7619 | 0.6 |
| No log | 2.34 | 1000 | 0.0707 | 0.6381 | 0.855 | 0.7308 | 0.2 |
| No log | 2.34 | 1000 | 0.1316 | 0.8054 | 0.89 | 0.8456 | 0.099 |
| No log | 2.34 | 1000 | 0.1522 | 0.4435 | 0.55 | 0.4911 | 0.015 |
| No log | 2.34 | 1000 | 0.0669 | 0.5134 | 0.575 | 0.5425 | 0.3000 |
| No log | 2.34 | 1000 | 0.0756 | 0.7846 | 0.965 | 0.8655 | 0.6 |
| No log | 2.34 | 1000 | 0.0534 | 0.4922 | 0.63 | 0.5526 | 0.0300 |
| No log | 2.34 | 1000 | 0.0616 | 0.7788 | 0.88 | 0.8263 | 0.2 |
| No log | 2.34 | 1000 | 0.0580 | 0.8889 | 0.6667 | 0.7619 | 0.5 |
| No log | 2.34 | 1000 | 0.0486 | 0.7287 | 0.9 | 0.8054 | 0.2 |
| No log | 2.34 | 1000 | 0.0402 | 0.8447 | 0.87 | 0.8571 | 0.4 |
| No log | 2.34 | 1000 | 0.0664 | 0.6916 | 0.74 | 0.7150 | 0.2 |
| No log | 2.34 | 1000 | 0.0490 | 0.7840 | 0.835 | 0.8087 | 0.4 |
| No log | 2.34 | 1000 | 0.0485 | 0.5076 | 0.665 | 0.5758 | 0.3000 |
| No log | 2.34 | 1000 | 0.0289 | 0.8739 | 0.97 | 0.9194 | 0.0880 |
| No log | 2.34 | 1000 | 0.0954 | 0.4286 | 0.6 | 0.5 | 0.4 |
| No log | 2.34 | 1000 | 0.0526 | 0.8020 | 0.8141 | 0.8080 | 0.5 |
| No log | 2.34 | 1000 | 0.1072 | 0.5976 | 0.4757 | 0.5297 | 0.3000 |
| No log | 2.34 | 1000 | 0.0871 | 0.5181 | 0.645 | 0.5746 | 0.035 |
| No log | 2.34 | 1000 | 0.0400 | 0.8524 | 0.895 | 0.8732 | 0.5 |
| No log | 2.34 | 1000 | 0.0618 | 0.7724 | 0.95 | 0.8520 | 0.07 |
| No log | 2.34 | 1000 | 0.0618 | 0.7724 | 0.95 | 0.8520 | 0.07 |
| No log | 2.34 | 1000 | 0.0461 | 0.6435 | 0.695 | 0.6683 | 0.2 |
| No log | 2.34 | 1000 | 0.0622 | 0.8203 | 0.8990 | 0.8578 | 0.4 |
| No log | 2.34 | 1000 | 0.0463 | 0.6721 | 0.8241 | 0.7404 | 0.3000 |
| No log | 2.34 | 1000 | 0.0532 | 0.7038 | 0.915 | 0.7957 | 0.2 |
| No log | 2.34 | 1000 | 0.0472 | 0.7870 | 0.85 | 0.8173 | 0.2 |
| No log | 2.34 | 1000 | 0.0422 | 0.7636 | 0.84 | 0.8000 | 0.3000 |
| No log | 2.34 | 1000 | 0.0516 | 0.7064 | 0.83 | 0.7632 | 0.6 |
| No log | 2.34 | 1000 | 0.0513 | 0.7661 | 0.835 | 0.7990 | 0.3000 |
| No log | 2.34 | 1000 | 0.0401 | 0.8636 | 0.855 | 0.8593 | 0.4 |
| No log | 2.34 | 1000 | 0.0501 | 0.7536 | 0.78 | 0.7666 | 0.5 |
| No log | 2.34 | 1000 | 0.0321 | 0.8846 | 0.805 | 0.8429 | 0.5 |
| No log | 2.34 | 1000 | 0.0655 | 0.7277 | 0.855 | 0.7862 | 0.2 |
| No log | 2.34 | 1000 | 0.0532 | 0.4387 | 0.68 | 0.5333 | 0.074 |
| No log | 2.34 | 1000 | 0.0596 | 0.5510 | 0.675 | 0.6067 | 0.5 |
| No log | 2.34 | 1000 | 0.0501 | 0.7319 | 0.86 | 0.7908 | 0.0880 |
| No log | 2.34 | 1000 | 0.0648 | 0.6622 | 0.745 | 0.7012 | 0.7000 |
| No log | 2.34 | 1000 | 0.0582 | 0.7658 | 0.85 | 0.8057 | 0.4 |
| No log | 2.34 | 1000 | 0.0396 | 0.7980 | 0.81 | 0.8040 | 0.4 |
| No log | 2.34 | 1000 | 0.1084 | 0.5018 | 0.705 | 0.5863 | 0.3000 |
| No log | 2.34 | 1000 | 0.0701 | 0.7895 | 0.825 | 0.8068 | 0.4 |
| No log | 2.34 | 1000 | 0.0474 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.34 | 1000 | 0.0474 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.34 | 1000 | 0.0474 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.34 | 1000 | 0.0474 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.34 | 1000 | 0.1290 | 0.5256 | 0.5707 | 0.5472 | 0.1 |
| No log | 2.34 | 1000 | 0.0608 | 0.7523 | 0.8131 | 0.7816 | 0.6 |
| No log | 2.34 | 1000 | 0.0189 | 0.9282 | 0.97 | 0.9487 | 0.2 |
| No log | 2.34 | 1000 | 0.0021 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 2.34 | 1000 | 0.0027 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.097 |
| No log | 2.34 | 1000 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.056 |
| No log | 2.34 | 1000 | 0.0026 | 0.9803 | 0.995 | 0.9876 | 0.6 |
| No log | 2.34 | 1000 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.34 | 1000 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.34 | 1000 | 0.0198 | 0.9890 | 0.9 | 0.9424 | 0.4 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.056 |
| No log | 2.34 | 1000 | 0.0371 | 0.9195 | 0.8 | 0.8556 | 0.2 |
| No log | 2.34 | 1000 | 0.0018 | 0.9901 | 1.0 | 0.9950 | 0.024 |
| No log | 2.34 | 1000 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 2.34 | 1000 | 0.0032 | 0.9851 | 0.995 | 0.9900 | 0.9 |
| No log | 2.34 | 1000 | 0.0051 | 0.9701 | 0.975 | 0.9726 | 0.6 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.021 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.058 |
| No log | 2.34 | 1000 | 0.0021 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 2.34 | 1000 | 0.0009 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 2.34 | 1000 | 0.0170 | 0.9340 | 0.92 | 0.9270 | 0.081 |
| No log | 2.34 | 1000 | 0.1219 | 0.4164 | 0.56 | 0.4776 | 0.7000 |
| No log | 2.34 | 1000 | 0.0952 | 0.3952 | 0.3379 | 0.3643 | 0.4 |
| No log | 2.34 | 1000 | 0.1251 | 0.675 | 0.675 | 0.675 | 0.3000 |
| No log | 2.34 | 1000 | 0.1113 | 0.6292 | 0.755 | 0.6864 | 0.4 |
| No log | 2.93 | 1250 | 0.0419 | 0.9630 | 0.91 | 0.9357 | 0.5 |
| No log | 2.93 | 1250 | 0.0121 | 0.9289 | 0.915 | 0.9219 | 0.5 |
| No log | 2.93 | 1250 | 0.0339 | 0.8660 | 0.905 | 0.8851 | 0.4 |
| No log | 2.93 | 1250 | 0.0163 | 0.8945 | 0.975 | 0.9330 | 0.093 |
| No log | 2.93 | 1250 | 0.0396 | 0.9340 | 0.92 | 0.9270 | 0.2 |
| No log | 2.93 | 1250 | 0.0100 | 0.9802 | 0.99 | 0.9851 | 0.7000 |
| No log | 2.93 | 1250 | 0.0143 | 0.9320 | 0.9648 | 0.9481 | 0.5 |
| No log | 2.93 | 1250 | 0.0105 | 0.9706 | 0.99 | 0.9802 | 0.4 |
| No log | 2.93 | 1250 | 0.0121 | 0.9434 | 1.0 | 0.9709 | 0.5 |
| No log | 2.93 | 1250 | 0.0309 | 0.9672 | 0.885 | 0.9243 | 0.7000 |
| No log | 2.93 | 1250 | 0.0077 | 0.9660 | 0.995 | 0.9803 | 0.4 |
| No log | 2.93 | 1250 | 0.0124 | 0.9463 | 0.97 | 0.9580 | 0.5 |
| No log | 2.93 | 1250 | 0.0073 | 0.9569 | 1.0 | 0.9780 | 0.2 |
| No log | 2.93 | 1250 | 0.0173 | 0.9522 | 0.995 | 0.9731 | 0.6 |
| No log | 2.93 | 1250 | 0.0136 | 0.9343 | 0.995 | 0.9637 | 0.2 |
| No log | 2.93 | 1250 | 0.0105 | 0.9259 | 1.0 | 0.9615 | 0.068 |
| No log | 2.93 | 1250 | 0.0096 | 0.9608 | 0.9949 | 0.9776 | 0.7000 |
| No log | 2.93 | 1250 | 0.0111 | 0.965 | 0.965 | 0.965 | 0.7000 |
| No log | 2.93 | 1250 | 0.0467 | 0.8732 | 0.8995 | 0.8861 | 0.6 |
| No log | 2.93 | 1250 | 0.0166 | 0.9259 | 1.0 | 0.9615 | 0.0300 |
| No log | 2.93 | 1250 | 0.0117 | 0.9343 | 0.995 | 0.9637 | 0.0370 |
| No log | 2.93 | 1250 | 0.0485 | 0.9458 | 0.96 | 0.9529 | 0.065 |
| No log | 2.93 | 1250 | 0.0044 | 0.9947 | 0.945 | 0.9692 | 0.6 |
| No log | 2.93 | 1250 | 0.0055 | 0.9949 | 0.9848 | 0.9898 | 0.2 |
| No log | 2.93 | 1250 | 0.0187 | 0.9474 | 0.99 | 0.9682 | 0.2 |
| No log | 2.93 | 1250 | 0.0500 | 0.9251 | 0.865 | 0.8941 | 0.5 |
| No log | 2.93 | 1250 | 0.0064 | 0.9275 | 0.9648 | 0.9458 | 0.3000 |
| No log | 2.93 | 1250 | 0.0216 | 0.9116 | 0.98 | 0.9446 | 0.3000 |
| No log | 2.93 | 1250 | 0.0163 | 0.9187 | 0.96 | 0.9389 | 0.4 |
| No log | 2.93 | 1250 | 0.0152 | 0.9476 | 0.995 | 0.9707 | 0.039 |
| No log | 2.93 | 1250 | 0.0307 | 0.9461 | 0.965 | 0.9554 | 0.2 |
| No log | 2.93 | 1250 | 0.0253 | 0.9557 | 0.97 | 0.9628 | 0.7000 |
| No log | 2.93 | 1250 | 0.0146 | 0.9336 | 0.985 | 0.9586 | 0.079 |
| No log | 2.93 | 1250 | 0.0120 | 0.9662 | 1.0 | 0.9828 | 0.2 |
| No log | 2.93 | 1250 | 0.0108 | 0.9519 | 0.99 | 0.9706 | 0.081 |
| No log | 2.93 | 1250 | 0.0805 | 0.8691 | 0.83 | 0.8491 | 0.4 |
| No log | 2.93 | 1250 | 0.0097 | 0.9378 | 0.98 | 0.9584 | 0.2 |
| No log | 2.93 | 1250 | 0.0146 | 0.9899 | 0.985 | 0.9875 | 0.3000 |
| No log | 2.93 | 1250 | 0.1217 | 0.7041 | 0.6935 | 0.6987 | 0.067 |
| No log | 2.93 | 1250 | 0.0422 | 0.9492 | 0.9397 | 0.9444 | 0.092 |
| No log | 2.93 | 1250 | 0.0494 | 0.8282 | 0.94 | 0.8806 | 0.3000 |
| No log | 2.93 | 1250 | 0.0115 | 0.9692 | 0.945 | 0.9570 | 0.2 |
| No log | 2.93 | 1250 | 0.0111 | 0.9756 | 1.0 | 0.9877 | 0.2 |
| No log | 2.93 | 1250 | 0.0058 | 0.9896 | 0.9598 | 0.9745 | 0.8 |
| No log | 2.93 | 1250 | 0.0065 | 0.9561 | 0.98 | 0.9679 | 0.6 |
| No log | 2.93 | 1250 | 0.0038 | 0.97 | 0.97 | 0.97 | 0.8 |
| No log | 2.93 | 1250 | 0.0094 | 0.9569 | 1.0 | 0.9780 | 0.4 |
| No log | 2.93 | 1250 | 0.0317 | 0.8957 | 0.945 | 0.9197 | 0.5 |
| No log | 2.93 | 1250 | 0.0123 | 0.9804 | 1.0 | 0.9901 | 0.2 |
| No log | 2.93 | 1250 | 0.0247 | 0.9703 | 0.98 | 0.9751 | 0.7000 |
| No log | 2.93 | 1250 | 0.0155 | 0.9799 | 0.975 | 0.9774 | 0.8 |
| No log | 2.93 | 1250 | 0.0046 | 0.9242 | 0.9898 | 0.9559 | 0.5 |
| No log | 2.93 | 1250 | 0.1172 | 0.7368 | 0.77 | 0.7531 | 0.099 |
| No log | 2.93 | 1250 | 0.0624 | 0.88 | 0.88 | 0.88 | 0.3000 |
| No log | 2.93 | 1250 | 0.0098 | 0.9660 | 0.995 | 0.9803 | 0.4 |
| No log | 2.93 | 1250 | 0.0158 | 0.9569 | 1.0 | 0.9780 | 0.089 |
| No log | 2.93 | 1250 | 0.1054 | 0.9379 | 0.83 | 0.8806 | 0.4 |
| No log | 2.93 | 1250 | 0.0062 | 0.9803 | 0.995 | 0.9876 | 0.7000 |
| No log | 2.93 | 1250 | 0.1195 | 0.8404 | 0.895 | 0.8668 | 0.2 |
| No log | 2.93 | 1250 | 0.0110 | 0.9434 | 1.0 | 0.9709 | 0.5 |
| No log | 2.93 | 1250 | 0.0096 | 0.9662 | 1.0 | 0.9828 | 0.2 |
| No log | 2.93 | 1250 | 0.0089 | 0.9289 | 0.98 | 0.9538 | 0.7000 |
| No log | 2.93 | 1250 | 0.0469 | 0.8981 | 0.925 | 0.9113 | 0.5 |
| No log | 2.93 | 1250 | 0.0068 | 0.9660 | 0.995 | 0.9803 | 0.3000 |
| No log | 2.93 | 1250 | 0.0121 | 0.975 | 0.975 | 0.975 | 0.9 |
| No log | 2.93 | 1250 | 0.0088 | 0.9515 | 0.98 | 0.9655 | 0.8 |
| No log | 2.93 | 1250 | 0.0147 | 0.9567 | 0.995 | 0.9755 | 0.1 |
| No log | 2.93 | 1250 | 0.0082 | 0.9615 | 1.0 | 0.9804 | 0.3000 |
| No log | 2.93 | 1250 | 0.0314 | 0.9072 | 0.88 | 0.8934 | 0.3000 |
| No log | 2.93 | 1250 | 0.0562 | 0.8507 | 0.94 | 0.8931 | 0.2 |
| No log | 2.93 | 1250 | 0.0090 | 0.9369 | 0.965 | 0.9507 | 0.2 |
| No log | 2.93 | 1250 | 0.0617 | 0.8515 | 0.86 | 0.8557 | 0.6 |
| No log | 2.93 | 1250 | 0.0106 | 0.9612 | 0.99 | 0.9754 | 0.4 |
| No log | 2.93 | 1250 | 0.0152 | 0.9471 | 0.985 | 0.9657 | 0.6 |
| No log | 2.93 | 1250 | 0.0152 | 0.8489 | 0.955 | 0.8988 | 0.4 |
| No log | 2.93 | 1250 | 0.0076 | 0.9592 | 0.94 | 0.9495 | 0.4 |
| No log | 2.93 | 1250 | 0.0182 | 0.9519 | 0.99 | 0.9706 | 0.4 |
| No log | 2.93 | 1250 | 0.0138 | 0.9538 | 0.93 | 0.9418 | 0.4 |
| No log | 2.93 | 1250 | 0.0619 | 0.9436 | 0.92 | 0.9316 | 0.084 |
| No log | 2.93 | 1250 | 0.0237 | 0.8638 | 0.92 | 0.8910 | 0.4 |
| No log | 2.93 | 1250 | 0.0904 | 0.6464 | 0.8586 | 0.7375 | 0.002 |
| No log | 2.93 | 1250 | 0.0250 | 0.9559 | 0.975 | 0.9653 | 0.2 |
| No log | 2.93 | 1250 | 0.1178 | 0.8077 | 0.84 | 0.8235 | 0.3000 |
| No log | 2.93 | 1250 | 0.0186 | 0.9198 | 0.86 | 0.8889 | 0.7000 |
| No log | 2.93 | 1250 | 0.0663 | 0.7547 | 0.8 | 0.7767 | 0.4 |
| No log | 2.93 | 1250 | 0.0218 | 0.8974 | 0.875 | 0.8861 | 0.4 |
| No log | 2.93 | 1250 | 0.0739 | 0.8571 | 0.84 | 0.8485 | 0.4 |
| No log | 2.93 | 1250 | 0.0874 | 0.7802 | 0.905 | 0.8380 | 0.3000 |
| No log | 2.93 | 1250 | 0.0600 | 0.7122 | 0.4975 | 0.5858 | 0.6 |
| No log | 2.93 | 1250 | 0.0507 | 0.7939 | 0.905 | 0.8458 | 0.3000 |
| No log | 2.93 | 1250 | 0.0443 | 0.8095 | 0.935 | 0.8677 | 0.3000 |
| No log | 2.93 | 1250 | 0.0917 | 0.7689 | 0.865 | 0.8141 | 0.3000 |
| No log | 2.93 | 1250 | 0.0432 | 0.8443 | 0.895 | 0.8689 | 0.5 |
| No log | 2.93 | 1250 | 0.0252 | 0.9072 | 0.88 | 0.8934 | 0.6 |
| No log | 2.93 | 1250 | 0.0664 | 0.7788 | 0.845 | 0.8106 | 0.4 |
| No log | 2.93 | 1250 | 0.0598 | 0.8679 | 0.92 | 0.8932 | 0.4 |
| No log | 2.93 | 1250 | 0.0567 | 0.9021 | 0.875 | 0.8883 | 0.7000 |
| No log | 2.93 | 1250 | 0.0465 | 0.8122 | 0.865 | 0.8378 | 0.5 |
| No log | 2.93 | 1250 | 0.0344 | 0.8789 | 0.8392 | 0.8586 | 0.6 |
| No log | 2.93 | 1250 | 0.0602 | 0.7277 | 0.815 | 0.7689 | 0.4 |
| No log | 2.93 | 1250 | 0.0737 | 0.7929 | 0.785 | 0.7889 | 0.6 |
| No log | 2.93 | 1250 | 0.0569 | 0.8763 | 0.85 | 0.8629 | 0.6 |
| No log | 2.93 | 1250 | 0.0428 | 0.8157 | 0.885 | 0.8489 | 0.3000 |
| No log | 2.93 | 1250 | 0.1329 | 0.8458 | 0.85 | 0.8479 | 0.2 |
| No log | 2.93 | 1250 | 0.0249 | 0.7963 | 0.86 | 0.8269 | 0.4 |
| No log | 2.93 | 1250 | 0.0321 | 0.8990 | 0.9036 | 0.9013 | 0.2 |
| No log | 2.93 | 1250 | 0.0664 | 0.8246 | 0.94 | 0.8785 | 0.3000 |
| No log | 2.93 | 1250 | 0.0761 | 0.7673 | 0.7789 | 0.7731 | 0.4 |
| No log | 2.93 | 1250 | 0.0222 | 0.7874 | 0.815 | 0.8010 | 0.3000 |
| No log | 2.93 | 1250 | 0.0843 | 0.7397 | 0.895 | 0.8100 | 0.4 |
| No log | 2.93 | 1250 | 0.0317 | 0.8827 | 0.79 | 0.8338 | 0.6 |
| No log | 2.93 | 1250 | 0.0608 | 0.8696 | 0.8 | 0.8333 | 0.6 |
| No log | 2.93 | 1250 | 0.0715 | 0.8705 | 0.84 | 0.8550 | 0.6 |
| No log | 2.93 | 1250 | 0.1113 | 0.7425 | 0.865 | 0.7991 | 0.4 |
| No log | 2.93 | 1250 | 0.0726 | 0.8263 | 0.785 | 0.8051 | 0.4 |
| No log | 2.93 | 1250 | 0.0607 | 0.8244 | 0.845 | 0.8346 | 0.5 |
| No log | 2.93 | 1250 | 0.0487 | 0.8054 | 0.8945 | 0.8476 | 0.4 |
| No log | 2.93 | 1250 | 0.1693 | 0.6640 | 0.82 | 0.7338 | 0.093 |
| No log | 2.93 | 1250 | 0.0755 | 0.5393 | 0.515 | 0.5269 | 0.4 |
| No log | 2.93 | 1250 | 0.0950 | 0.9140 | 0.85 | 0.8808 | 0.4 |
| No log | 2.93 | 1250 | 0.2408 | 0.3272 | 0.8040 | 0.4651 | 0.001 |
| No log | 2.93 | 1250 | 0.0749 | 0.8852 | 0.8141 | 0.8482 | 0.4 |
| No log | 2.93 | 1250 | 0.0854 | 0.7284 | 0.885 | 0.7991 | 0.3000 |
| No log | 2.93 | 1250 | 0.0253 | 0.9278 | 0.9091 | 0.9184 | 0.2 |
| No log | 2.93 | 1250 | 0.0653 | 0.9137 | 0.9 | 0.9068 | 0.5 |
| No log | 2.93 | 1250 | 0.0265 | 0.86 | 0.8643 | 0.8622 | 0.4 |
| No log | 2.93 | 1250 | 0.0351 | 0.9198 | 0.745 | 0.8232 | 0.8 |
| No log | 2.93 | 1250 | 0.0405 | 0.7462 | 0.735 | 0.7406 | 0.5 |
| No log | 2.93 | 1250 | 0.0618 | 0.8366 | 0.845 | 0.8408 | 0.6 |
| No log | 2.93 | 1250 | 0.0598 | 0.6314 | 0.865 | 0.7300 | 0.3000 |
| No log | 2.93 | 1250 | 0.0537 | 0.9272 | 0.955 | 0.9409 | 0.3000 |
| No log | 2.93 | 1250 | 0.0533 | 0.8930 | 0.835 | 0.8630 | 0.7000 |
| No log | 2.93 | 1250 | 0.1265 | 0.7054 | 0.85 | 0.7710 | 0.4 |
| No log | 2.93 | 1250 | 0.0204 | 0.8596 | 0.7387 | 0.7946 | 0.8 |
| No log | 2.93 | 1250 | 0.1870 | 0.5634 | 0.7588 | 0.6467 | 0.025 |
| No log | 2.93 | 1250 | 0.0949 | 0.8796 | 0.84 | 0.8593 | 0.5 |
| No log | 2.93 | 1250 | 0.0499 | 0.8424 | 0.855 | 0.8486 | 0.6 |
| No log | 2.93 | 1250 | 0.0597 | 0.8725 | 0.89 | 0.8812 | 0.5 |
| No log | 2.93 | 1250 | 0.0988 | 0.8098 | 0.83 | 0.8198 | 0.2 |
| No log | 2.93 | 1250 | 0.0557 | 0.8939 | 0.8 | 0.8443 | 0.7000 |
| No log | 2.93 | 1250 | 0.1298 | 0.7391 | 0.935 | 0.8256 | 0.084 |
| No log | 2.93 | 1250 | 0.0369 | 0.7934 | 0.96 | 0.8688 | 0.4 |
| No log | 2.93 | 1250 | 0.0567 | 0.8737 | 0.865 | 0.8693 | 0.5 |
| No log | 2.93 | 1250 | 0.0158 | 0.8638 | 0.92 | 0.8910 | 0.5 |
| No log | 2.93 | 1250 | 0.0956 | 0.8 | 0.82 | 0.8099 | 0.5 |
| No log | 2.93 | 1250 | 0.0361 | 0.8641 | 0.89 | 0.8768 | 0.6 |
| No log | 2.93 | 1250 | 0.0573 | 0.8796 | 0.84 | 0.8593 | 0.6 |
| No log | 2.93 | 1250 | 0.0363 | 0.8836 | 0.835 | 0.8586 | 0.6 |
| No log | 2.93 | 1250 | 0.0881 | 0.815 | 0.815 | 0.815 | 0.5 |
| No log | 2.93 | 1250 | 0.0366 | 0.905 | 0.905 | 0.905 | 0.6 |
| No log | 2.93 | 1250 | 0.0707 | 0.7111 | 0.64 | 0.6737 | 0.3000 |
| No log | 2.93 | 1250 | 0.1180 | 0.8198 | 0.705 | 0.7581 | 0.5 |
| No log | 2.93 | 1250 | 0.0270 | 0.8889 | 0.84 | 0.8638 | 0.4 |
| No log | 2.93 | 1250 | 0.0707 | 0.8474 | 0.805 | 0.8256 | 0.7000 |
| No log | 2.93 | 1250 | 0.0879 | 0.8564 | 0.805 | 0.8299 | 0.6 |
| No log | 2.93 | 1250 | 0.0520 | 0.8646 | 0.83 | 0.8469 | 0.7000 |
| No log | 2.93 | 1250 | 0.0237 | 0.8744 | 0.87 | 0.8722 | 0.6 |
| No log | 2.93 | 1250 | 0.0159 | 0.8667 | 0.91 | 0.8878 | 0.3000 |
| No log | 2.93 | 1250 | 0.1066 | 0.8010 | 0.785 | 0.7929 | 0.6 |
| No log | 2.93 | 1250 | 0.0601 | 0.6872 | 0.67 | 0.6785 | 0.4 |
| No log | 2.93 | 1250 | 0.0887 | 0.8762 | 0.92 | 0.8976 | 0.068 |
| No log | 2.93 | 1250 | 0.1065 | 0.4278 | 0.77 | 0.55 | 0.046 |
| No log | 2.93 | 1250 | 0.0610 | 0.7568 | 0.8485 | 0.8000 | 0.093 |
| No log | 2.93 | 1250 | 0.0639 | 0.8906 | 0.855 | 0.8724 | 0.3000 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.93 | 1250 | 0.0093 | 0.7946 | 0.9036 | 0.8456 | 0.6 |
| No log | 2.93 | 1250 | 0.0035 | 0.9657 | 0.985 | 0.9752 | 0.5 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.005 |
| No log | 2.93 | 1250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.0430 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.016 |
| No log | 2.93 | 1250 | 0.0024 | 0.9947 | 1.0 | 0.9973 | 0.058 |
| No log | 2.93 | 1250 | 0.0012 | 1.0 | 0.99 | 0.9950 | 0.6 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.93 | 1250 | 0.0032 | 0.9949 | 0.985 | 0.9899 | 0.9 |
| No log | 2.93 | 1250 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0092 | 0.975 | 0.975 | 0.975 | 0.005 |
| No log | 2.93 | 1250 | 0.0168 | 0.9894 | 0.935 | 0.9614 | 0.079 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.076 |
| No log | 2.93 | 1250 | 0.0155 | 0.9742 | 0.945 | 0.9594 | 0.2 |
| No log | 2.93 | 1250 | 0.0008 | 0.9950 | 1.0 | 0.9975 | 0.021 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.006 |
| No log | 2.93 | 1250 | 0.0030 | 0.9949 | 0.985 | 0.9899 | 0.3000 |
| No log | 2.93 | 1250 | 0.0012 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 2.93 | 1250 | 0.0035 | 0.9704 | 0.985 | 0.9777 | 0.5 |
| No log | 2.93 | 1250 | 0.0359 | 0.9218 | 0.825 | 0.8707 | 0.7000 |
| No log | 2.93 | 1250 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.016 |
| No log | 2.93 | 1250 | 0.0014 | 1.0 | 0.985 | 0.9924 | 0.4 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.0100 |
| No log | 2.93 | 1250 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.7000 |
| No log | 2.93 | 1250 | 0.0011 | 0.995 | 0.995 | 0.995 | 0.3000 |
| No log | 2.93 | 1250 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.096 |
| No log | 2.93 | 1250 | 0.0049 | 0.9375 | 0.975 | 0.9559 | 0.5 |
| No log | 2.93 | 1250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.07 |
| No log | 2.93 | 1250 | 0.0183 | 0.9796 | 0.96 | 0.9697 | 0.0090 |
| No log | 2.93 | 1250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 2.93 | 1250 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.93 | 1250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 2.93 | 1250 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 2.93 | 1250 | 0.0049 | 0.9792 | 1.0 | 0.9895 | 0.023 |
| No log | 2.93 | 1250 | 0.0037 | 0.9423 | 0.98 | 0.9608 | 0.2 |
| No log | 2.93 | 1250 | 0.0042 | 0.9804 | 1.0 | 0.9901 | 0.5 |
| No log | 2.93 | 1250 | 0.0010 | 0.9950 | 1.0 | 0.9975 | 0.6 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.93 | 1250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0016 | 0.9950 | 0.99 | 0.9925 | 0.074 |
| No log | 2.93 | 1250 | 0.0178 | 0.9310 | 0.9545 | 0.9426 | 0.7000 |
| No log | 2.93 | 1250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 2.93 | 1250 | 0.0022 | 0.99 | 0.99 | 0.99 | 0.7000 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 2.93 | 1250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.069 |
| No log | 2.93 | 1250 | 0.0154 | 0.9346 | 1.0 | 0.9662 | 0.005 |
| No log | 2.93 | 1250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 2.93 | 1250 | 0.0169 | 0.9378 | 0.905 | 0.9211 | 0.5 |
| No log | 2.93 | 1250 | 0.0040 | 0.9950 | 1.0 | 0.9975 | 0.002 |
| No log | 2.93 | 1250 | 0.0010 | 0.9950 | 1.0 | 0.9975 | 0.8 |
| No log | 2.93 | 1250 | 0.0011 | 0.9901 | 1.0 | 0.9950 | 0.078 |
| No log | 2.93 | 1250 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 2.93 | 1250 | 0.0121 | 0.7941 | 0.945 | 0.8630 | 0.0430 |
| No log | 2.93 | 1250 | 0.0116 | 1.0 | 0.99 | 0.9950 | 0.0180 |
| No log | 2.93 | 1250 | 0.0068 | 0.98 | 0.98 | 0.98 | 0.2 |
| No log | 2.93 | 1250 | 0.0161 | 0.7083 | 0.8629 | 0.7780 | 0.5 |
| No log | 2.93 | 1250 | 0.0064 | 0.9147 | 0.965 | 0.9392 | 0.3000 |
| No log | 2.93 | 1250 | 0.0376 | 0.9171 | 0.885 | 0.9008 | 0.6 |
| No log | 2.93 | 1250 | 0.1399 | 0.5974 | 0.8214 | 0.6917 | 0.055 |
| No log | 2.93 | 1250 | 0.0056 | 0.955 | 0.955 | 0.955 | 0.6 |
| No log | 2.93 | 1250 | 0.0888 | 0.7885 | 0.8723 | 0.8283 | 0.3000 |
| No log | 2.93 | 1250 | 0.0270 | 0.8421 | 0.96 | 0.8972 | 0.2 |
| No log | 2.93 | 1250 | 0.0279 | 0.8872 | 0.865 | 0.8759 | 0.4 |
| No log | 2.93 | 1250 | 0.0275 | 0.8786 | 0.905 | 0.8916 | 0.3000 |
| No log | 2.93 | 1250 | 0.0257 | 0.9113 | 0.925 | 0.9181 | 0.6 |
| No log | 2.93 | 1250 | 0.0236 | 0.9451 | 0.86 | 0.9005 | 0.5 |
| No log | 2.93 | 1250 | 0.0325 | 0.8950 | 0.81 | 0.8504 | 0.5 |
| No log | 2.93 | 1250 | 0.0183 | 0.8233 | 0.885 | 0.8530 | 0.4 |
| No log | 2.93 | 1250 | 0.0395 | 0.8848 | 0.73 | 0.8 | 0.7000 |
| No log | 2.93 | 1250 | 0.0487 | 0.8614 | 0.87 | 0.8657 | 0.4 |
| No log | 2.93 | 1250 | 0.0022 | 0.9950 | 1.0 | 0.9975 | 0.08 |
| No log | 2.93 | 1250 | 0.0322 | 0.8846 | 0.805 | 0.8429 | 0.7000 |
| No log | 2.93 | 1250 | 0.0265 | 0.8309 | 0.86 | 0.8452 | 0.3000 |
| No log | 2.93 | 1250 | 0.0280 | 0.7961 | 0.8283 | 0.8119 | 0.4 |
| No log | 2.93 | 1250 | 0.0623 | 0.7317 | 0.7538 | 0.7426 | 0.5 |
| No log | 2.93 | 1250 | 0.0245 | 0.8173 | 0.85 | 0.8333 | 0.3000 |
| No log | 2.93 | 1250 | 0.0182 | 0.9010 | 0.91 | 0.9055 | 0.3000 |
| No log | 2.93 | 1250 | 0.0065 | 0.9646 | 0.955 | 0.9598 | 0.5 |
| No log | 2.93 | 1250 | 0.0314 | 0.8168 | 0.825 | 0.8209 | 0.6 |
| No log | 2.93 | 1250 | 0.0336 | 0.7965 | 0.9 | 0.8451 | 0.2 |
| No log | 2.93 | 1250 | 0.0281 | 0.9115 | 0.875 | 0.8929 | 0.6 |
| No log | 2.93 | 1250 | 0.0238 | 0.7441 | 0.785 | 0.7640 | 0.4 |
| No log | 2.93 | 1250 | 0.0272 | 0.8429 | 0.885 | 0.8634 | 0.4 |
| No log | 2.93 | 1250 | 0.0300 | 0.9113 | 0.925 | 0.9181 | 0.085 |
| No log | 2.93 | 1250 | 0.0641 | 0.7423 | 0.72 | 0.7310 | 0.4 |
| No log | 2.93 | 1250 | 0.0104 | 0.9899 | 0.98 | 0.9849 | 0.3000 |
| No log | 2.93 | 1250 | 0.0012 | 0.9901 | 1.0 | 0.9950 | 0.015 |
| No log | 2.93 | 1250 | 0.0147 | 0.9652 | 0.97 | 0.9676 | 0.3000 |
| No log | 2.93 | 1250 | 0.0431 | 0.8069 | 0.815 | 0.8109 | 0.3000 |
| No log | 2.93 | 1250 | 0.0464 | 0.7357 | 0.835 | 0.7822 | 0.4 |
| No log | 2.93 | 1250 | 0.1084 | 0.6739 | 0.6596 | 0.6667 | 0.5 |
| No log | 2.93 | 1250 | 0.0323 | 0.8171 | 0.67 | 0.7363 | 0.5 |
| No log | 2.93 | 1250 | 0.0135 | 0.9495 | 0.94 | 0.9447 | 0.5 |
| No log | 2.93 | 1250 | 0.0255 | 0.9122 | 0.935 | 0.9235 | 0.4 |
| No log | 2.93 | 1250 | 0.0035 | 0.9949 | 0.985 | 0.9899 | 0.7000 |
| No log | 2.93 | 1250 | 0.0263 | 0.8645 | 0.925 | 0.8937 | 0.2 |
| No log | 2.93 | 1250 | 0.0308 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.93 | 1250 | 0.0650 | 0.7286 | 0.7846 | 0.7556 | 0.4 |
| No log | 2.93 | 1250 | 0.0204 | 0.9347 | 0.93 | 0.9323 | 0.4 |
| No log | 2.93 | 1250 | 0.0202 | 0.8973 | 0.83 | 0.8623 | 0.7000 |
| No log | 2.93 | 1250 | 0.0026 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 2.93 | 1250 | 0.0106 | 0.9444 | 0.935 | 0.9397 | 0.5 |
| No log | 2.93 | 1250 | 0.0335 | 0.9206 | 0.87 | 0.8946 | 0.5 |
| No log | 2.93 | 1250 | 0.0056 | 0.9583 | 0.9583 | 0.9583 | 0.4 |
| No log | 2.93 | 1250 | 0.0179 | 0.9508 | 0.87 | 0.9086 | 0.6 |
| No log | 2.93 | 1250 | 0.0683 | 0.8579 | 0.815 | 0.8359 | 0.3000 |
| No log | 2.93 | 1250 | 0.0353 | 0.7524 | 0.8061 | 0.7783 | 0.4 |
| No log | 2.93 | 1250 | 0.0034 | 0.9852 | 1.0 | 0.9926 | 0.4 |
| No log | 2.93 | 1250 | 0.0788 | 0.775 | 0.775 | 0.775 | 0.5 |
| No log | 2.93 | 1250 | 0.0367 | 0.5762 | 0.435 | 0.4957 | 0.3000 |
| No log | 2.93 | 1250 | 0.0292 | 0.9189 | 0.85 | 0.8831 | 0.2 |
| No log | 2.93 | 1250 | 0.1175 | 0.6840 | 0.79 | 0.7332 | 0.3000 |
| No log | 2.93 | 1250 | 0.0845 | 0.6380 | 0.705 | 0.6698 | 0.5 |
| No log | 2.93 | 1250 | 0.0696 | 0.6443 | 0.815 | 0.7196 | 0.0730 |
| No log | 2.93 | 1250 | 0.1093 | 0.7011 | 0.915 | 0.7939 | 0.04 |
| No log | 2.93 | 1250 | 0.0433 | 0.6906 | 0.96 | 0.8033 | 0.2 |
| No log | 2.93 | 1250 | 0.0390 | 0.7955 | 0.875 | 0.8333 | 0.4 |
| No log | 2.93 | 1250 | 0.0451 | 0.4958 | 0.5930 | 0.5400 | 0.3000 |
| No log | 2.93 | 1250 | 0.0529 | 0.7949 | 0.775 | 0.7848 | 0.4 |
| No log | 2.93 | 1250 | 0.0531 | 0.6389 | 0.8364 | 0.7244 | 0.4 |
| No log | 2.93 | 1250 | 0.0401 | 0.8246 | 0.87 | 0.8467 | 0.3000 |
| No log | 2.93 | 1250 | 0.0401 | 0.8246 | 0.87 | 0.8467 | 0.3000 |
| No log | 2.93 | 1250 | 0.0398 | 0.7991 | 0.855 | 0.8261 | 0.3000 |
| No log | 2.93 | 1250 | 0.0521 | 0.8054 | 0.89 | 0.8456 | 0.2 |
| No log | 2.93 | 1250 | 0.0352 | 0.7655 | 0.865 | 0.8122 | 0.2 |
| No log | 2.93 | 1250 | 0.0392 | 0.8308 | 0.81 | 0.8203 | 0.5 |
| No log | 2.93 | 1250 | 0.0879 | 0.7026 | 0.815 | 0.7546 | 0.2 |
| No log | 2.93 | 1250 | 0.0456 | 0.7571 | 0.795 | 0.7756 | 0.4 |
| No log | 2.93 | 1250 | 0.0443 | 0.8009 | 0.905 | 0.8498 | 0.2 |
| No log | 2.93 | 1250 | 0.1638 | 0.6618 | 0.675 | 0.6683 | 0.2 |
| No log | 2.93 | 1250 | 0.0513 | 0.8075 | 0.755 | 0.7804 | 0.5 |
| No log | 2.93 | 1250 | 0.1173 | 0.7004 | 0.8342 | 0.7615 | 0.078 |
| No log | 2.93 | 1250 | 0.0355 | 0.8488 | 0.87 | 0.8593 | 0.3000 |
| No log | 2.93 | 1250 | 0.0355 | 0.8488 | 0.87 | 0.8593 | 0.3000 |
| No log | 2.93 | 1250 | 0.0278 | 0.8611 | 0.6739 | 0.7561 | 0.6 |
| No log | 2.93 | 1250 | 0.0278 | 0.8611 | 0.6739 | 0.7561 | 0.6 |
| No log | 2.93 | 1250 | 0.0372 | 0.8556 | 0.8 | 0.8269 | 0.4 |
| No log | 2.93 | 1250 | 0.0436 | 0.5326 | 0.49 | 0.5104 | 0.4 |
| No log | 2.93 | 1250 | 0.0543 | 0.4583 | 0.8462 | 0.5946 | 0.079 |
| No log | 2.93 | 1250 | 0.0329 | 0.7571 | 0.795 | 0.7756 | 0.5 |
| No log | 2.93 | 1250 | 0.0363 | 0.6199 | 0.685 | 0.6508 | 0.4 |
| No log | 2.93 | 1250 | 0.0533 | 0.7336 | 0.785 | 0.7585 | 0.3000 |
| No log | 2.93 | 1250 | 0.0408 | 0.7703 | 0.805 | 0.7873 | 0.6 |
| No log | 2.93 | 1250 | 0.0598 | 0.7075 | 0.895 | 0.7903 | 0.2 |
| No log | 2.93 | 1250 | 0.1094 | 0.4933 | 0.3558 | 0.4134 | 0.3000 |
| No log | 2.93 | 1250 | 0.0787 | 0.7178 | 0.865 | 0.7846 | 0.2 |
| No log | 2.93 | 1250 | 0.0527 | 0.7682 | 0.845 | 0.8048 | 0.5 |
| No log | 2.93 | 1250 | 0.1015 | 0.7677 | 0.76 | 0.7638 | 0.7000 |
| No log | 2.93 | 1250 | 0.1224 | 0.7130 | 0.82 | 0.7628 | 0.8 |
| No log | 2.93 | 1250 | 0.0743 | 0.6326 | 0.835 | 0.7198 | 0.2 |
| No log | 2.93 | 1250 | 0.1311 | 0.7948 | 0.91 | 0.8485 | 0.066 |
| No log | 2.93 | 1250 | 0.1690 | 0.3813 | 0.53 | 0.4435 | 0.007 |
| No log | 2.93 | 1250 | 0.0683 | 0.5067 | 0.565 | 0.5343 | 0.3000 |
| No log | 2.93 | 1250 | 0.0764 | 0.7846 | 0.965 | 0.8655 | 0.5 |
| No log | 2.93 | 1250 | 0.0525 | 0.4941 | 0.625 | 0.5519 | 0.035 |
| No log | 2.93 | 1250 | 0.0609 | 0.7719 | 0.88 | 0.8224 | 0.2 |
| No log | 2.93 | 1250 | 0.0620 | 1.0 | 0.6667 | 0.8 | 0.6 |
| No log | 2.93 | 1250 | 0.0491 | 0.7218 | 0.895 | 0.7991 | 0.2 |
| No log | 2.93 | 1250 | 0.0400 | 0.8543 | 0.85 | 0.8521 | 0.4 |
| No log | 2.93 | 1250 | 0.0685 | 0.5930 | 0.845 | 0.6969 | 0.08 |
| No log | 2.93 | 1250 | 0.0483 | 0.7837 | 0.815 | 0.7990 | 0.4 |
| No log | 2.93 | 1250 | 0.0501 | 0.5163 | 0.635 | 0.5695 | 0.4 |
| No log | 2.93 | 1250 | 0.0281 | 0.8837 | 0.95 | 0.9157 | 0.2 |
| No log | 2.93 | 1250 | 0.0968 | 0.4542 | 0.57 | 0.5055 | 0.5 |
| No log | 2.93 | 1250 | 0.0514 | 0.7824 | 0.8492 | 0.8145 | 0.4 |
| No log | 2.93 | 1250 | 0.1047 | 0.5730 | 0.4951 | 0.5312 | 0.3000 |
| No log | 2.93 | 1250 | 0.1028 | 0.4599 | 0.63 | 0.5316 | 0.0190 |
| No log | 2.93 | 1250 | 0.0395 | 0.8634 | 0.885 | 0.8741 | 0.5 |
| No log | 2.93 | 1250 | 0.0624 | 0.7727 | 0.935 | 0.8462 | 0.099 |
| No log | 2.93 | 1250 | 0.0624 | 0.7727 | 0.935 | 0.8462 | 0.099 |
| No log | 2.93 | 1250 | 0.0479 | 0.7011 | 0.61 | 0.6524 | 0.3000 |
| No log | 2.93 | 1250 | 0.0632 | 0.8026 | 0.9242 | 0.8592 | 0.3000 |
| No log | 2.93 | 1250 | 0.0469 | 0.6113 | 0.8693 | 0.7178 | 0.2 |
| No log | 2.93 | 1250 | 0.0531 | 0.7580 | 0.83 | 0.7924 | 0.4 |
| No log | 2.93 | 1250 | 0.0487 | 0.8122 | 0.8 | 0.8060 | 0.3000 |
| No log | 2.93 | 1250 | 0.0417 | 0.7752 | 0.845 | 0.8086 | 0.3000 |
| No log | 2.93 | 1250 | 0.0516 | 0.7031 | 0.805 | 0.7506 | 0.6 |
| No log | 2.93 | 1250 | 0.0501 | 0.6760 | 0.97 | 0.7967 | 0.085 |
| No log | 2.93 | 1250 | 0.0404 | 0.8233 | 0.885 | 0.8530 | 0.3000 |
| No log | 2.93 | 1250 | 0.0511 | 0.7677 | 0.76 | 0.7638 | 0.5 |
| No log | 2.93 | 1250 | 0.0323 | 0.8601 | 0.83 | 0.8448 | 0.4 |
| No log | 2.93 | 1250 | 0.0683 | 0.7761 | 0.78 | 0.7781 | 0.3000 |
| No log | 2.93 | 1250 | 0.0529 | 0.4094 | 0.7 | 0.5166 | 0.0730 |
| No log | 2.93 | 1250 | 0.0599 | 0.5579 | 0.675 | 0.6109 | 0.5 |
| No log | 2.93 | 1250 | 0.0541 | 0.7020 | 0.895 | 0.7868 | 0.0510 |
| No log | 2.93 | 1250 | 0.0635 | 0.6109 | 0.785 | 0.6871 | 0.6 |
| No log | 2.93 | 1250 | 0.0579 | 0.7713 | 0.86 | 0.8132 | 0.4 |
| No log | 2.93 | 1250 | 0.0394 | 0.7682 | 0.845 | 0.8048 | 0.3000 |
| No log | 2.93 | 1250 | 0.1085 | 0.5036 | 0.69 | 0.5823 | 0.3000 |
| No log | 2.93 | 1250 | 0.0718 | 0.7861 | 0.79 | 0.7880 | 0.4 |
| No log | 2.93 | 1250 | 0.0482 | 0.6516 | 0.795 | 0.7162 | 0.4 |
| No log | 2.93 | 1250 | 0.0482 | 0.6516 | 0.795 | 0.7162 | 0.4 |
| No log | 2.93 | 1250 | 0.0482 | 0.6516 | 0.795 | 0.7162 | 0.4 |
| No log | 2.93 | 1250 | 0.0482 | 0.6516 | 0.795 | 0.7162 | 0.4 |
| No log | 2.93 | 1250 | 0.1464 | 0.4810 | 0.5758 | 0.5241 | 0.0720 |
| No log | 2.93 | 1250 | 0.0629 | 0.7465 | 0.8182 | 0.7807 | 0.6 |
| No log | 2.93 | 1250 | 0.0212 | 0.9314 | 0.95 | 0.9406 | 0.2 |
| No log | 2.93 | 1250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.3000 |
| No log | 2.93 | 1250 | 0.0029 | 1.0 | 0.995 | 0.9975 | 0.3000 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.066 |
| No log | 2.93 | 1250 | 0.0004 | 1.0 | 0.995 | 0.9975 | 0.6 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.067 |
| No log | 2.93 | 1250 | 0.0031 | 0.9851 | 0.99 | 0.9875 | 0.8 |
| No log | 2.93 | 1250 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0212 | 0.9541 | 0.935 | 0.9444 | 0.096 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.048 |
| No log | 2.93 | 1250 | 0.0418 | 0.8018 | 0.89 | 0.8436 | 0.0600 |
| No log | 2.93 | 1250 | 0.0017 | 0.9901 | 1.0 | 0.9950 | 0.0180 |
| No log | 2.93 | 1250 | 0.0004 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 2.93 | 1250 | 0.0034 | 0.9851 | 0.995 | 0.9900 | 0.9 |
| No log | 2.93 | 1250 | 0.0056 | 0.9653 | 0.975 | 0.9701 | 0.6 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.0220 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.077 |
| No log | 2.93 | 1250 | 0.0024 | 0.9900 | 0.995 | 0.9925 | 0.2 |
| No log | 2.93 | 1250 | 0.0007 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 2.93 | 1250 | 0.0200 | 0.9122 | 0.935 | 0.9235 | 0.025 |
| No log | 2.93 | 1250 | 0.1241 | 0.4107 | 0.575 | 0.4792 | 0.7000 |
| No log | 2.93 | 1250 | 0.0958 | 0.3934 | 0.3310 | 0.3596 | 0.4 |
| No log | 2.93 | 1250 | 0.1214 | 0.6587 | 0.685 | 0.6716 | 0.3000 |
| No log | 2.93 | 1250 | 0.1157 | 0.6058 | 0.73 | 0.6621 | 0.4 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "v2-WtP-FT-12L-256BS-UD-Opus-cUD-cOpus", "results": []}]}
|
igorsterner/v2-WtP-FT-12L-256BS-UD-Opus-cUD-cOpus
| null |
[
"transformers",
"safetensors",
"xlm-token",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:37:55+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
adammoss/gpt-pretrain-lm-sn25
| null |
[
"transformers",
"safetensors",
"gptmodel",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:37:58+00:00
|
null | null |
{"license": "openrail"}
|
Sognar/MaroModel
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-23T21:39:37+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4455
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
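These settings map one-to-one onto `transformers.TrainingArguments`; the sketch below reconstructs them (the `output_dir` name is hypothetical, and the stated Adam betas/epsilon plus the linear schedule are the library defaults, so they need no explicit arguments):

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above. Adam with
# betas=(0.9, 0.999) and epsilon=1e-08 and a linear LR schedule are
# the transformers defaults, so only the explicit values are set.
training_args = TrainingArguments(
    output_dir="clasificador-muchocine",  # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```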
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3758 | 0.3665 |
| 1.4111 | 2.0 | 776 | 1.3400 | 0.4077 |
| 1.0525 | 3.0 | 1164 | 1.4455 | 0.4297 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
Mouzer/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:40:02+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4385
- Accuracy: 0.4413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3293 | 0.3948 |
| 1.3934 | 2.0 | 776 | 1.3066 | 0.4116 |
| 1.0283 | 3.0 | 1164 | 1.4385 | 0.4413 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
Anagmedina/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:40:26+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-analysis-model
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4125
- Accuracy: 0.8433
- Precision: 0.8181
- Recall: 0.8433
- F1: 0.8155
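For a quick sanity check of the published checkpoint, the standard `pipeline` API can be used (a minimal sketch; the label set depends on the undocumented fine-tuning data):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub by its repo id.
classifier = pipeline(
    "text-classification",
    model="annavtkn/sentiment-analysis-model",
)

# The base model is multilingual, so non-English input is reasonable to try.
print(classifier("Ce film était une très bonne surprise !"))
# -> [{'label': ..., 'score': ...}]  (labels depend on the fine-tuning data)
```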
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5521 | 1.0 | 4574 | 0.4900 | 0.8093 | 0.8041 | 0.8093 | 0.7833 |
| 0.4772 | 2.0 | 9148 | 0.4125 | 0.8433 | 0.8181 | 0.8433 | 0.8155 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "sentiment-analysis-model", "results": []}]}
|
annavtkn/sentiment-analysis-model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:40:26+00:00
|
null | null |
{}
|
TH78/freddieking
| null |
[
"region:us"
] | null |
2024-04-23T21:40:40+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4652
- Accuracy: 0.4426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3703 | 0.3884 |
| 1.3806 | 2.0 | 776 | 1.3091 | 0.4245 |
| 0.9712 | 3.0 | 1164 | 1.4652 | 0.4426 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
AboGeek/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:40:44+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4140
- Accuracy: 0.4477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3447 | 0.3948 |
| 1.4031 | 2.0 | 776 | 1.2922 | 0.4219 |
| 1.0011 | 3.0 | 1164 | 1.4140 | 0.4477 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
Jhosx/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:40:52+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4294
- Accuracy: 0.4310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3438 | 0.4142 |
| 1.391 | 2.0 | 776 | 1.3130 | 0.4219 |
| 1.0162 | 3.0 | 1164 | 1.4294 | 0.4310 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
mmarquez/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:41:06+00:00
|
text-generation
|
transformers
|
# meta-llama/Meta-Llama-3-8B-Instruct AWQ
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Meta-Llama-3-8B-Instruct-AWQ"
system_message = "You are Meta-Llama-3-8B-Instruct, incarnated as a powerful AI. You were created by meta-llama."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = ("You're standing on the surface of the Earth. "
          "You walk one mile south, one mile west and one mile north. "
          "You end up exactly where you started. Where are you?")
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
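One caveat on the example above: the ChatML-style `<|im_start|>` markers are not Llama 3's native chat format. If the tokenizer ships a chat template, letting it build the prompt is likely safer (a hedged alternative, not part of the original card):

```python
# Build the prompt with the tokenizer's own chat template instead.
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": prompt},
]
tokens = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).cuda()
```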
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only; macOS users should use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all model types (see the sketch after this list)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
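As an illustration of the vLLM route above, loading this AWQ checkpoint might look like the following (a sketch; exact engine arguments vary across vLLM versions):

```python
from vllm import LLM, SamplingParams

# quantization="awq" selects vLLM's AWQ kernels for the 4-bit weights.
llm = LLM(model="solidrust/Meta-Llama-3-8B-Instruct-AWQ", quantization="awq")
sampling = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Why is the sky blue?"], sampling)
print(outputs[0].outputs[0].text)
```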
|
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
|
solidrust/Meta-Llama-3-8B-Instruct-AWQ
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T21:41:21+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
mariaesther/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:41:25+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4404
- Accuracy: 0.4310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3463 | 0.4103 |
| 1.3791 | 2.0 | 776 | 1.3135 | 0.4245 |
| 0.9907 | 3.0 | 1164 | 1.4404 | 0.4310 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
qwerasd-qweasd/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:41:33+00:00
|
null | null |
{}
|
stafdif/Aika
| null |
[
"region:us"
] | null |
2024-04-23T21:41:55+00:00
|
|
text-to-image
|
diffusers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "diffusers"}
|
rubbrband/awpainting_v11
| null |
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-23T21:42:23+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4711
- Accuracy: 0.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3508 | 0.4052 |
| 1.376 | 2.0 | 776 | 1.3100 | 0.4232 |
| 0.9589 | 3.0 | 1164 | 1.4711 | 0.44 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
Arckmonde/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:42:47+00:00
|
null | null |
{"license": "apache-2.0"}
|
ruibatalabs/drone-inspector
| null |
[
"license:apache-2.0",
"region:us"
] | null |
2024-04-23T21:42:48+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1523
- Accuracy: 0.4490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.5191 | 0.4529 |
| 0.8375 | 2.0 | 776 | 1.7402 | 0.4387 |
| 0.5269 | 3.0 | 1164 | 2.1523 | 0.4490 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
prissila/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:42:53+00:00
|
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** baris-yazici
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
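No usage example is included; a minimal, hypothetical inference sketch using Unsloth's fast loading path (the repo id comes from this page; the prompt and generation settings are illustrative):

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit for low-memory inference.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="baris-yazici/mistral7b_fake_news_detect",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference kernels

inputs = tokenizer("Classify this headline as real or fake news: ...", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```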
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
|
baris-yazici/mistral7b_fake_news_detect
| null |
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:43:08+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4601
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3781 | 0.3716 |
| 1.3815 | 2.0 | 776 | 1.3322 | 0.4155 |
| 1.0246 | 3.0 | 1164 | 1.4601 | 0.4297 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
elwilnor/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:43:39+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4140
- Accuracy: 0.4477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3447 | 0.3948 |
| 1.4031 | 2.0 | 776 | 1.2922 | 0.4219 |
| 1.0011 | 3.0 | 1164 | 1.4140 | 0.4477 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
edgartenorio/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:43:44+00:00
|
null | null |
{}
|
bobby-nakamoto/test-model-71
| null |
[
"region:us"
] | null |
2024-04-23T21:44:01+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3964
- Accuracy: 0.4310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3830 | 0.3729 |
| 1.4264 | 2.0 | 776 | 1.3051 | 0.4116 |
| 1.0769 | 3.0 | 1164 | 1.3964 | 0.4310 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]}
|
rednaxela8121/clasificador-muchocine
| null |
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:44:15+00:00
|
text-generation
|
transformers
|
# Quant Infos
## Includes latest bpe tokenizer fixes 🎉
- Updated for latest bpe pre-tokenizer fixes https://github.com/ggerganov/llama.cpp/pull/6920
- quants done with an importance matrix for improved quantization loss
- K & IQ quants in basically all variants from Q6_K down to IQ1_S
- fixed end token for instruct mode (`<|eot_id|>`, token id 128009)
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [f4ab2a41476600a98067a9474ea8f9e6db41bcfa](https://github.com/ggerganov/llama.cpp/commit/f4ab2a41476600a98067a9474ea8f9e6db41bcfa) (master from 2024-04-29)
- Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) dataset.
```
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
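The resulting imatrix file can then be passed to llama.cpp's `quantize` tool when producing the low-bit variants; a sketch using the same placeholder variables as above (the output name and quant type are illustrative):
```
./quantize --imatrix $out_path/imat-f16-gmerged.dat $model_name-f16.gguf $model_name-IQ4_XS.gguf IQ4_XS
```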
# Original Model Card
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, Mini version, in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which refer to the context length (in tokens) they can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model is suited to applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated into the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For few-shot prompting, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
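Rather than assembling these tags by hand, the tokenizer's chat template can render the same format; a minimal sketch (the message contents are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "I am going to Paris, what should I see?"},
]

# add_generation_prompt=True appends the trailing <|assistant|> tag so the
# model continues from there; tokenize=False returns the raw prompt string.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```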
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

messages = [
    {"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"` (see the sketch after this list)
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
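A minimal loading sketch for the V100 fallback described above (the dtype choice is an assumption, based on V100's lack of bfloat16 support):

```python
import torch
from transformers import AutoModelForCausalLM

# Fallback for V100-class GPUs: flash attention requires Ampere or newer,
# hence eager attention; float16 instead of bfloat16.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.float16,
    attn_implementation="eager",
    trust_remote_code=True,
)
```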
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
{"language": ["en"], "license": "mit", "tags": ["nlp", "code", "microsoft", "phi", "phi-3", "gguf", "imatrix", "importance matrix"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "license_link": "LICENSE", "pipeline_tag": "text-generation"}
|
qwp4w3hyb/Phi-3-mini-4k-instruct-iMat-GGUF
| null |
[
"transformers",
"gguf",
"phi3",
"text-generation",
"nlp",
"code",
"microsoft",
"phi",
"phi-3",
"imatrix",
"importance matrix",
"conversational",
"custom_code",
"en",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:46:03+00:00
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [WesPro/PsykidelicLlama3](https://huggingface.co/WesPro/PsykidelicLlama3) + [mpasila/Llama-3-LimaRP-LoRA-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-LoRA-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: WesPro/PsykidelicLlama3+mpasila/Llama-3-LimaRP-LoRA-8B
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
```
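For reproducibility, the merge defined by the YAML above can presumably be re-run with mergekit's CLI; a minimal sketch (the config and output paths are illustrative):
```
mergekit-yaml config.yaml ./merged-model --cuda
```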
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["WesPro/PsykidelicLlama3", "mpasila/Llama-3-LimaRP-LoRA-8B"]}
|
WesPro/PsyKidelic_Llama3_LimaRP
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:WesPro/PsykidelicLlama3",
"base_model:mpasila/Llama-3-LimaRP-LoRA-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T21:47:06+00:00
|
text-to-image
|
diffusers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "diffusers"}
|
rubbrband/asianBrmBeautyrealmix_v40
| null |
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-23T21:48:49+00:00
|
null | null |
{}
|
mash01/llama-2-7B-32K-instruct-7209-web-articles-fine-tuned-fine-tuned-adapters
| null |
[
"region:us"
] | null |
2024-04-23T21:49:04+00:00
|
|
null | null |
{}
|
mash01/llama-2-7B-32K-instruct-7209-web-articles-fine-tuned-fine-tuned
| null |
[
"region:us"
] | null |
2024-04-23T21:49:48+00:00
|
|
text-generation
|
transformers
|
# Resharded
Resharded version of https://huggingface.co/tiiuae/falcon-7b-instruct for low-RAM environments (e.g. Colab, Kaggle) in safetensors.
Tutorial: https://medium.com/@vilsonrodrigues/run-your-private-llm-falcon-7b-instruct-with-less-than-6gb-of-gpu-using-4-bit-quantization-ff1d4ffbabcc
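In the spirit of that tutorial, a minimal 4-bit loading sketch with bitsandbytes (the quantization settings are assumptions, not taken verbatim from the tutorial):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "droneinspector/droneinspector"  # this resharded repo

# NF4 4-bit quantization keeps the 7B model within roughly 6 GB of GPU memory.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
    trust_remote_code=True,
)
```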
---
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
⚠️ Falcon is now available as a core model in the `transformers` library! To use the in-library version, please install the latest version of `transformers` with `pip install git+https://github.com/huggingface/transformers.git`, then simply remove the `trust_remote_code=True` argument from `from_pretrained()`.
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M-token mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Baize](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected]
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["tiiuae/falcon-refinedweb"], "inference": true, "widget": [{"text": "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?", "example_title": "Abu Dhabi Trip"}, {"text": "What's the Everett interpretation of quantum mechanics?", "example_title": "Q/A: Quantum & Answers"}, {"text": "Give me a list of the top 10 dive sites you would recommend around the world.", "example_title": "Diving Top 10"}, {"text": "Can you tell me more about deep-water soloing?", "example_title": "Extreme sports"}, {"text": "Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?", "example_title": "Twitter Helper"}, {"text": "What are the responsabilities of a Chief Llama Officer?", "example_title": "Trendy Jobs"}]}
|
droneinspector/droneinspector
| null |
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T21:50:13+00:00
|
null | null |
{"license": "openrail"}
|
TesterSet/benben
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-23T21:50:32+00:00
|
|
text-classification
|
transformers
|
{}
|
titanbot/Electra-Large-MRPC
| null |
[
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:50:37+00:00
|
|
null | null |
{}
|
siddhant-14/Weather
| null |
[
"region:us"
] | null |
2024-04-23T21:52:24+00:00
|
|
text-generation
|
transformers
|
Base model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
Dataset: hansoldeco's own in-domain dataset
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
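The card ships no usage example; a minimal, hypothetical chat-style inference sketch with `transformers` (the repo id comes from this page; the Korean prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sosoai/hansoldeco-beomi-Llama-3-Open-Ko-8B-Instruct-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative Korean prompt ("Please explain the interior defect-repair process").
messages = [{"role": "user", "content": "인테리어 하자 보수 절차를 설명해 주세요."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```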
|
{}
|
sosoai/hansoldeco-beomi-Llama-3-Open-Ko-8B-Instruct-preview
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T21:52:36+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.1272
- eval_runtime: 2.8602
- eval_samples_per_second: 69.924
- eval_steps_per_second: 8.741
- epoch: 3.9936
- step: 312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1
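This repository contains only the LoRA adapter, so it has to be applied on top of the base model at load time; a minimal sketch with PEFT (access to the gated Llama-3 base model and the dtype choice are assumptions):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep"

# AutoPeftModelForCausalLM reads the adapter config, loads the base model
# (meta-llama/Meta-Llama-3-8B-Instruct), and attaches the LoRA weights to it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
```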
|
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep", "results": []}]}
|
mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null |
2024-04-23T21:54:27+00:00
|
text-generation
|
transformers
|
{}
|
ke-lly/45516626_0
| null |
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T21:55:03+00:00
|
|
null | null |
{}
|
mash01/llama-2-7b-fine-tuned
| null |
[
"region:us"
] | null |
2024-04-23T21:55:06+00:00
|
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
|
ahajahmed/Enlighten_Instruct
| null |
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null |
2024-04-23T21:56:17+00:00
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [ResplendentAI/Kei_Llama3_8B](https://huggingface.co/ResplendentAI/Kei_Llama3_8B) as a base.
### Models Merged
The following models were included in the merge:
* [cgato/L3-TheSpice-8b-v0.1.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3)
* [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cgato/L3-TheSpice-8b-v0.1.3
- model: Sao10K/L3-Solana-8B-v1
- model: ResplendentAI/Kei_Llama3_8B
merge_method: model_stock
base_model: ResplendentAI/Kei_Llama3_8B
dtype: float16
```
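For reference, a configuration like this is typically executed with mergekit's command-line entry point (a minimal sketch; available flags vary by mergekit version):
```
pip install mergekit
mergekit-yaml config.yaml ./merged-model
```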
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cgato/L3-TheSpice-8b-v0.1.3", "Sao10K/L3-Solana-8B-v1", "ResplendentAI/Kei_Llama3_8B"]}
|
jeiku/Average_Normie_l3_v0_8B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:cgato/L3-TheSpice-8b-v0.1.3",
"base_model:Sao10K/L3-Solana-8B-v1",
"base_model:ResplendentAI/Kei_Llama3_8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T21:56:31+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
The model was generated using SFT + DPO on Mistral-7B as the base model.
## Training Details
Mistral-7B was first fine-tuned with SFT on a golf dataset in ChatML format.
The fine-tuned model was then trained with the DPO algorithm on Intel/orca_dpo_pairs in ChatML format.
### Training Procedure
Both training runs were performed using PEFT.
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
SFT parameters:
- per_device_train_batch_size: 1
- gradient_accumulation_steps: 4
- gradient_checkpointing: True
- learning_rate: 5e-5
- lr_scheduler_type: cosine
- max_steps: 55
- save_strategy: no
- logging_steps: 5
- output_dir: new_model
- optim: paged_adamw_32bit
- warmup_steps: 30
- fp16: True

DPO parameters:
- beta: 0.1
- loss_type: sigmoid
- max_prompt_length: 512
- max_length: 1024
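For illustration, the DPO stage with the parameters above could be reproduced along these lines with TRL's `DPOTrainer`. This is a minimal sketch, not the exact training script: the SFT checkpoint path is a placeholder, and the column renaming is an assumption about how the dataset was mapped to DPO's expected format.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Placeholder path: the intermediate SFT checkpoint is not published with this card.
sft_checkpoint = "path/to/sft-checkpoint"
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# DPOTrainer expects "prompt"/"chosen"/"rejected" columns; orca_dpo_pairs uses
# "question", so we rename it (the system prompt is ignored in this sketch).
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.rename_column("question", "prompt")

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL builds a frozen reference copy when none is given
    args=TrainingArguments(
        output_dir="dpo-out",
        per_device_train_batch_size=1,
        remove_unused_columns=False,  # DPO's collator needs the raw columns
    ),
    beta=0.1,
    loss_type="sigmoid",
    max_prompt_length=512,
    max_length=1024,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```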
## Model Card Contact
https://huggingface.co/berkouille
|
{"library_name": "transformers", "tags": []}
|
berkouille/assistant_DPO_92
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T21:56:51+00:00
|
null |
diffusers
|
{}
|
tianyi0216/model4
| null |
[
"diffusers",
"safetensors",
"diffusers:StableDiffusionInstructPix2PixPipeline",
"region:us"
] | null |
2024-04-23T21:57:25+00:00
|
|
null | null |
[GGUF of https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored](https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored)

This model is based on Llama-3-8b-Instruct, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
Lexi is uncensored, which makes the model highly compliant: it will follow any request, even unethical ones. You are advised to implement your own alignment layer before exposing the model as a service.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license.
|
{"license": "other", "license_name": "license", "license_link": "https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored"}
|
Orenguteng/Llama-3-8B-Lexi-Uncensored-GGUF
| null |
[
"gguf",
"license:other",
"region:us"
] | null |
2024-04-23T21:57:52+00:00
|
image-segmentation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4429
- Mean Iou: 0.0127
- Mean Accuracy: 0.0289
- Overall Accuracy: 0.2813
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.0012
- Accuracy Flat-sidewalk: 0.7342
- Accuracy Flat-crosswalk: 0.0
- Accuracy Flat-cyclinglane: 0.0
- Accuracy Flat-parkingdriveway: 0.0
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.0
- Accuracy Human-person: 0.0
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.0
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.0
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.0538
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.0
- Accuracy Construction-fenceguardrail: 0.0
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: 0.0
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.0
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.1770
- Accuracy Nature-terrain: 0.0
- Accuracy Sky: 0.0149
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.0
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.0012
- Iou Flat-sidewalk: 0.3016
- Iou Flat-crosswalk: 0.0
- Iou Flat-cyclinglane: 0.0
- Iou Flat-parkingdriveway: 0.0
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.0
- Iou Human-person: 0.0
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.0
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.0
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.0318
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.0
- Iou Construction-fenceguardrail: 0.0
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: 0.0
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.0
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.0859
- Iou Nature-terrain: 0.0
- Iou Sky: 0.0108
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.0
- Iou Void-unclear: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
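Using the hyperparameters above, a comparable fine-tune could be launched roughly as follows. This is a sketch, not the original script: the dataset column names and the 35-class label count are assumptions based on the usual sidewalk-semantic recipe.
```python
from datasets import load_dataset
from transformers import (SegformerForSemanticSegmentation, SegformerImageProcessor,
                          Trainer, TrainingArguments)

# Column names and the 35-class label set follow the standard sidewalk-semantic setup.
ds = load_dataset("segments/sidewalk-semantic", split="train").train_test_split(
    test_size=0.1, seed=42
)
processor = SegformerImageProcessor()

def transform(batch):
    # Images live in "pixel_values" and masks in "label" in this dataset.
    return processor(batch["pixel_values"], batch["label"], return_tensors="pt")

ds["train"].set_transform(transform)
ds["test"].set_transform(transform)

model = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=35)

args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk-2",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=10,
    seed=42,
    remove_unused_columns=False,  # keep raw columns available to the transform
)

Trainer(model=model, args=args, train_dataset=ds["train"], eval_dataset=ds["test"]).train()
```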
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 3.5256 | 0.2 | 10 | 3.5147 | 0.0071 | 0.0401 | 0.1017 | nan | 0.0000 | 0.2861 | 0.0000 | 0.0000 | 0.0402 | 0.0 | 0.0011 | 0.0017 | 0.0 | 0.0035 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0215 | 0.0236 | 0.0002 | 0.0010 | 0.0 | 0.0053 | 0.0162 | 0.0020 | 0.5432 | 0.0000 | 0.0815 | 0.0166 | 0.0172 | 0.0010 | 0.0000 | 0.0028 | 0.2620 | 0.0002 | 0.0060 | 0.0294 | 0.0 | 0.0000 | 0.1889 | 0.0000 | 0.0000 | 0.0173 | 0.0 | 0.0011 | 0.0005 | 0.0 | 0.0029 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0026 | 0.0001 | 0.0000 | 0.0010 | 0.0 | 0.0038 | 0.0064 | 0.0001 | 0.0000 | 0.0000 | 0.0077 | 0.0010 | 0.0000 | 0.0010 | 0.0000 | 0.0027 | 0.0051 | 0.0002 | 0.0049 | 0.0002 |
| 3.3115 | 0.4 | 20 | 3.4349 | 0.0090 | 0.0293 | 0.1597 | nan | 0.0001 | 0.4536 | 0.0 | 0.0 | 0.0642 | 0.0 | 0.0002 | 0.0009 | 0.0 | 0.0008 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0075 | 0.0213 | 0.0 | 0.0009 | 0.0 | 0.0059 | 0.0046 | 0.0 | 0.0782 | 0.0 | 0.1300 | 0.0102 | 0.0231 | 0.0046 | 0.0000 | 0.0016 | 0.1731 | 0.0 | 0.0044 | 0.0114 | nan | 0.0001 | 0.2507 | 0.0 | 0.0 | 0.0202 | 0.0 | 0.0002 | 0.0004 | 0.0 | 0.0008 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0021 | 0.0001 | 0.0 | 0.0009 | 0.0 | 0.0038 | 0.0030 | 0.0 | 0.0000 | 0.0 | 0.0080 | 0.0012 | 0.0000 | 0.0044 | 0.0000 | 0.0016 | 0.0050 | 0.0 | 0.0038 | 0.0002 |
| 2.8003 | 0.6 | 30 | 3.3730 | 0.0087 | 0.0281 | 0.1245 | nan | 0.0054 | 0.3314 | 0.0000 | 0.0000 | 0.1290 | 0.0 | 0.0003 | 0.0031 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0023 | 0.0092 | 0.0 | 0.0011 | 0.0 | 0.0233 | 0.0022 | 0.0 | 0.0 | 0.0 | 0.1795 | 0.0126 | 0.0010 | 0.0239 | 0.0002 | 0.0046 | 0.2127 | 0.0 | 0.0075 | 0.0060 | nan | 0.0052 | 0.2092 | 0.0000 | 0.0000 | 0.0244 | 0.0 | 0.0003 | 0.0009 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0011 | 0.0001 | 0.0 | 0.0011 | 0.0 | 0.0078 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0013 | 0.0000 | 0.0203 | 0.0002 | 0.0041 | 0.0050 | 0.0 | 0.0054 | 0.0004 |
| 3.2521 | 0.8 | 40 | 3.2736 | 0.0110 | 0.0292 | 0.1863 | nan | 0.0294 | 0.5083 | 0.0001 | 0.0001 | 0.1290 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0015 | 0.0016 | 0.0 | 0.0016 | 0.0 | 0.0127 | 0.0038 | 0.0 | 0.0 | 0.0000 | 0.1112 | 0.0033 | 0.0 | 0.0218 | 0.0002 | 0.0042 | 0.1371 | 0.0 | 0.0163 | 0.0076 | nan | 0.0243 | 0.2651 | 0.0001 | 0.0001 | 0.0253 | 0.0 | 0.0004 | 0.0005 | 0.0 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.0001 | 0.0 | 0.0016 | 0.0 | 0.0058 | 0.0028 | 0.0 | 0.0 | 0.0000 | 0.0080 | 0.0011 | 0.0 | 0.0185 | 0.0002 | 0.0037 | 0.0049 | 0.0 | 0.0083 | 0.0011 |
| 2.9043 | 1.0 | 50 | 3.2220 | 0.0132 | 0.0291 | 0.1739 | nan | 0.1252 | 0.3934 | 0.0003 | 0.0003 | 0.1066 | 0.0 | 0.0063 | 0.0008 | 0.0 | 0.0068 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.0075 | 0.0 | 0.0098 | 0.0044 | 0.0 | 0.0 | 0.0 | 0.0582 | 0.0006 | 0.0 | 0.1309 | 0.0006 | 0.0081 | 0.1208 | 0.0 | 0.0094 | 0.0001 | nan | 0.0664 | 0.2317 | 0.0003 | 0.0003 | 0.0241 | 0.0 | 0.0053 | 0.0004 | 0.0 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0068 | 0.0 | 0.0052 | 0.0033 | 0.0 | 0.0 | 0.0 | 0.0085 | 0.0004 | 0.0 | 0.0710 | 0.0006 | 0.0064 | 0.0046 | 0.0 | 0.0061 | 0.0000 |
| 2.8893 | 1.2 | 60 | 3.1323 | 0.0128 | 0.0301 | 0.1824 | nan | 0.1147 | 0.2779 | 0.0000 | 0.0002 | 0.0638 | 0.0 | 0.0002 | 0.0001 | 0.0 | 0.0066 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0111 | 0.0 | 0.0017 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0052 | 0.0000 | 0.0 | 0.4865 | 0.0005 | 0.0062 | 0.0445 | 0.0 | 0.0025 | 0.0 | nan | 0.0637 | 0.1900 | 0.0000 | 0.0002 | 0.0202 | 0.0 | 0.0002 | 0.0001 | 0.0 | 0.0052 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0091 | 0.0 | 0.0014 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0033 | 0.0000 | 0.0 | 0.1301 | 0.0005 | 0.0050 | 0.0041 | 0.0 | 0.0022 | 0.0 |
| 2.8221 | 1.4 | 70 | 3.0049 | 0.0138 | 0.0298 | 0.2481 | nan | 0.0664 | 0.5578 | 0.0 | 0.0001 | 0.0184 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0015 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0101 | 0.0 | 0.0002 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.3322 | 0.0001 | 0.0147 | 0.0097 | 0.0 | 0.0000 | 0.0 | nan | 0.0443 | 0.2727 | 0.0 | 0.0001 | 0.0109 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0014 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0084 | 0.0 | 0.0002 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.1168 | 0.0001 | 0.0102 | 0.0032 | 0.0 | 0.0000 | 0.0 |
| 2.7321 | 1.6 | 80 | 2.9281 | 0.0129 | 0.0300 | 0.2121 | nan | 0.1000 | 0.3599 | 0.0 | 0.0 | 0.0076 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0022 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0172 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5224 | 0.0 | 0.0077 | 0.0040 | 0.0 | 0.0 | 0.0 | nan | 0.0577 | 0.2179 | 0.0 | 0.0 | 0.0056 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0019 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0126 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.1335 | 0.0 | 0.0065 | 0.0028 | 0.0 | 0.0 | 0.0 |
| 2.7583 | 1.8 | 90 | 2.9182 | 0.0107 | 0.0303 | 0.1746 | nan | 0.1465 | 0.1641 | 0.0 | 0.0 | 0.0036 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6905 | 0.0 | 0.0102 | 0.0008 | 0.0 | 0.0 | 0.0 | nan | 0.0714 | 0.1297 | 0.0 | 0.0 | 0.0030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0095 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.1398 | 0.0 | 0.0084 | 0.0007 | 0.0 | 0.0 | 0.0 |
| 3.1177 | 2.0 | 100 | 2.9230 | 0.0138 | 0.0297 | 0.2272 | nan | 0.1294 | 0.4556 | 0.0 | 0.0000 | 0.0030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0299 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3671 | 0.0 | 0.0219 | 0.0004 | 0.0 | 0.0 | 0.0 | nan | 0.0662 | 0.2463 | 0.0 | 0.0000 | 0.0026 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0196 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1193 | 0.0 | 0.0149 | 0.0004 | 0.0 | 0.0 | 0.0 |
| 3.041 | 2.2 | 110 | 2.8124 | 0.0138 | 0.0291 | 0.2549 | nan | 0.1402 | 0.6049 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0363 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1992 | 0.0 | 0.0075 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0683 | 0.2797 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0234 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0921 | 0.0 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1549 | 2.4 | 120 | 2.7993 | 0.0132 | 0.0292 | 0.2105 | nan | 0.1463 | 0.3812 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0572 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4022 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0692 | 0.2227 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0301 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1223 | 0.0 | 0.0053 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.7506 | 2.6 | 130 | 2.7869 | 0.0136 | 0.0290 | 0.2153 | nan | 0.1198 | 0.4194 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0626 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3578 | 0.0 | 0.0272 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0628 | 0.2315 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0319 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1161 | 0.0 | 0.0191 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.8666 | 2.8 | 140 | 2.7030 | 0.0133 | 0.0288 | 0.2546 | nan | 0.0626 | 0.5989 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0378 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2736 | 0.0 | 0.0047 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0417 | 0.2753 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0239 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1059 | 0.0 | 0.0041 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3693 | 3.0 | 150 | 2.6758 | 0.0133 | 0.0289 | 0.2790 | nan | 0.0661 | 0.7211 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0304 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1548 | 0.0 | 0.0089 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0432 | 0.3002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0211 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0808 | 0.0 | 0.0071 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4211 | 3.2 | 160 | 2.6509 | 0.0122 | 0.0292 | 0.3118 | nan | 0.0340 | 0.8762 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0157 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0493 | 0.0 | 0.0169 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0270 | 0.3255 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0129 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0375 | 0.0 | 0.0117 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2934 | 3.4 | 170 | 2.5811 | 0.0109 | 0.0290 | 0.3268 | nan | 0.0162 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0104 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0148 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0145 | 0.3322 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0093 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0131 | 0.0 | 0.0019 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2474 | 3.6 | 180 | 2.6740 | 0.0122 | 0.0287 | 0.3000 | nan | 0.0201 | 0.8363 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0619 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0461 | 0.0 | 0.0089 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0170 | 0.3185 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0342 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0354 | 0.0 | 0.0072 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4543 | 3.8 | 190 | 2.5741 | 0.0115 | 0.0287 | 0.3111 | nan | 0.0113 | 0.8837 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0529 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0263 | 0.0 | 0.0014 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0103 | 0.3246 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0317 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0219 | 0.0 | 0.0014 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.6415 | 4.0 | 200 | 2.4955 | 0.0114 | 0.0287 | 0.3121 | nan | 0.0075 | 0.8862 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0495 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0328 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0070 | 0.3248 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0302 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0266 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3359 | 4.2 | 210 | 2.6535 | 0.0130 | 0.0280 | 0.2474 | nan | 0.0389 | 0.6235 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1633 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1048 | 0.0 | 0.0211 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0285 | 0.2807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0537 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0632 | 0.0 | 0.0142 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8133 | 4.4 | 220 | 2.6000 | 0.0133 | 0.0285 | 0.2609 | nan | 0.0401 | 0.6643 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1069 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1350 | 0.0 | 0.0210 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0291 | 0.2901 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0447 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0749 | 0.0 | 0.0147 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3126 | 4.6 | 230 | 2.6429 | 0.0126 | 0.0288 | 0.1857 | nan | 0.0332 | 0.3374 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2814 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2779 | 0.0 | 0.0480 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0255 | 0.2045 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0639 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1067 | 0.0 | 0.0288 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2695 | 4.8 | 240 | 2.5140 | 0.0128 | 0.0282 | 0.2399 | nan | 0.0217 | 0.5869 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2003 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1312 | 0.0 | 0.0183 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0177 | 0.2729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0574 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0738 | 0.0 | 0.0145 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0622 | 5.0 | 250 | 2.4634 | 0.0126 | 0.0283 | 0.2656 | nan | 0.0107 | 0.6862 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1332 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1278 | 0.0 | 0.0058 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0097 | 0.2918 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0491 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0736 | 0.0 | 0.0052 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1988 | 5.2 | 260 | 2.5162 | 0.0125 | 0.0282 | 0.2209 | nan | 0.0083 | 0.5152 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2606 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1382 | 0.0 | 0.0379 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0077 | 0.2553 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0621 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0760 | 0.0 | 0.0236 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4214 | 5.4 | 270 | 2.5880 | 0.0122 | 0.0284 | 0.1888 | nan | 0.0134 | 0.3772 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3344 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1887 | 0.0 | 0.0516 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0117 | 0.2176 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0671 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0892 | 0.0 | 0.0279 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2255 | 5.6 | 280 | 2.4963 | 0.0127 | 0.0287 | 0.2732 | nan | 0.0126 | 0.7299 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1341 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0689 | 0.0 | 0.0301 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0113 | 0.3024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0513 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0476 | 0.0 | 0.0182 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3459 | 5.8 | 290 | 2.5055 | 0.0131 | 0.0288 | 0.2638 | nan | 0.0133 | 0.6801 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1239 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1258 | 0.0 | 0.0347 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0118 | 0.2933 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0718 | 0.0 | 0.0198 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1034 | 6.0 | 300 | 2.4549 | 0.0125 | 0.0288 | 0.2873 | nan | 0.0048 | 0.7776 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0929 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0897 | 0.0 | 0.0143 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0046 | 0.3101 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0430 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0581 | 0.0 | 0.0107 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2193 | 6.2 | 310 | 2.4227 | 0.0126 | 0.0290 | 0.2879 | nan | 0.0013 | 0.7619 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1482 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0013 | 0.3070 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0379 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0810 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3808 | 6.4 | 320 | 2.4239 | 0.0124 | 0.0290 | 0.2926 | nan | 0.0006 | 0.7900 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0797 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1109 | 0.0 | 0.0031 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0006 | 0.3122 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0397 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0675 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1201 | 6.6 | 330 | 2.4546 | 0.0130 | 0.0292 | 0.2795 | nan | 0.0010 | 0.7295 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0903 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1522 | 0.0 | 0.0186 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0010 | 0.3036 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0817 | 0.0 | 0.0131 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1429 | 6.8 | 340 | 2.4390 | 0.0121 | 0.0292 | 0.3077 | nan | 0.0004 | 0.8612 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0502 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0618 | 0.0 | 0.0185 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0004 | 0.3245 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0314 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0446 | 0.0 | 0.0122 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3745 | 7.0 | 350 | 2.4814 | 0.0132 | 0.0292 | 0.2555 | nan | 0.0020 | 0.6392 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0911 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1816 | 0.0 | 0.0800 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0020 | 0.2865 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0881 | 0.0 | 0.0287 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1907 | 7.2 | 360 | 2.4901 | 0.0130 | 0.0290 | 0.2387 | nan | 0.0014 | 0.5526 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1063 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2669 | 0.0 | 0.0588 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0014 | 0.2661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0432 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1055 | 0.0 | 0.0274 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1116 | 7.4 | 370 | 2.4841 | 0.0130 | 0.0290 | 0.2350 | nan | 0.0015 | 0.5323 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0908 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2968 | 0.0 | 0.0659 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0014 | 0.2612 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0397 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1097 | 0.0 | 0.0284 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4808 | 7.6 | 380 | 2.4890 | 0.0129 | 0.0293 | 0.2376 | nan | 0.0025 | 0.5715 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0758 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2136 | 0.0 | 0.1314 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0024 | 0.2729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0372 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0958 | 0.0 | 0.0319 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8601 | 7.8 | 390 | 2.5003 | 0.0128 | 0.0290 | 0.2250 | nan | 0.0022 | 0.4998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0898 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2944 | 0.0 | 0.1015 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0022 | 0.2538 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1094 | 0.0 | 0.0313 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.032 | 8.0 | 400 | 2.5240 | 0.0125 | 0.0289 | 0.2093 | nan | 0.0027 | 0.4406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1033 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3108 | 0.0 | 0.1262 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0026 | 0.2379 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1106 | 0.0 | 0.0326 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9364 | 8.2 | 410 | 2.4666 | 0.0127 | 0.0292 | 0.2720 | nan | 0.0024 | 0.7293 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0924 | 0.0 | 0.1028 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0023 | 0.3046 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0371 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0590 | 0.0 | 0.0282 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0335 | 8.4 | 420 | 2.4894 | 0.0129 | 0.0292 | 0.2402 | nan | 0.0046 | 0.5965 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0783 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1650 | 0.0 | 0.1478 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0044 | 0.2787 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0400 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0831 | 0.0 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0622 | 8.6 | 430 | 2.5457 | 0.0121 | 0.0287 | 0.1888 | nan | 0.0038 | 0.3645 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1536 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3132 | 0.0 | 0.1396 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0037 | 0.2129 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0500 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1108 | 0.0 | 0.0338 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9635 | 8.8 | 440 | 2.5416 | 0.0120 | 0.0287 | 0.1908 | nan | 0.0028 | 0.4200 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1216 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1900 | 0.0 | 0.2427 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0027 | 0.2335 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0465 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0899 | 0.0 | 0.0340 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9328 | 9.0 | 450 | 2.4707 | 0.0128 | 0.0293 | 0.2609 | nan | 0.0024 | 0.6792 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0528 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1358 | 0.0 | 0.1274 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0024 | 0.2958 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0327 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0750 | 0.0 | 0.0299 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0373 | 9.2 | 460 | 2.5003 | 0.0128 | 0.0292 | 0.2294 | nan | 0.0028 | 0.5341 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0638 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2482 | 0.0 | 0.1447 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0028 | 0.2641 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0349 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1011 | 0.0 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2552 | 9.4 | 470 | 2.4884 | 0.0130 | 0.0292 | 0.2400 | nan | 0.0020 | 0.5674 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0689 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2513 | 0.0 | 0.1038 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0020 | 0.2712 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1016 | 0.0 | 0.0296 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.956 | 9.6 | 480 | 2.5214 | 0.0126 | 0.0289 | 0.2153 | nan | 0.0034 | 0.4825 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1038 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2477 | 0.0 | 0.1458 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0033 | 0.2501 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0445 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1003 | 0.0 | 0.0320 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1743 | 9.8 | 490 | 2.4624 | 0.0127 | 0.0289 | 0.2689 | nan | 0.0018 | 0.7146 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0769 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1041 | 0.0 | 0.0848 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0018 | 0.3001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0402 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0633 | 0.0 | 0.0261 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0282 | 10.0 | 500 | 2.4429 | 0.0127 | 0.0289 | 0.2813 | nan | 0.0012 | 0.7342 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0538 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1770 | 0.0 | 0.0149 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0012 | 0.3016 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0318 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0859 | 0.0 | 0.0108 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "other", "tags": ["vision", "image-segmentation", "generated_from_trainer"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "segformer-b0-finetuned-segments-sidewalk-2", "results": []}]}
|
karthik540/segformer-b0-finetuned-segments-sidewalk-2
| null |
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:58:04+00:00
|
token-classification
|
transformers
|
{}
|
titanbot/Electra-Large-CONLL2003
| null |
[
"transformers",
"pytorch",
"electra",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T21:59:08+00:00
|
|
text-generation
|
transformers
|
{"license": "apache-2.0"}
|
Adityyaa/Mistral-7b_finetuned_mental_health
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-23T21:59:14+00:00
|
|
text-generation
| null |
## Usage
Package installation
```
pip install llama-cpp-python "huggingface_hub[cli]"
```
Download the model:
```
huggingface-cli download sourabhdattawad/meta-llama-3-8b-instruct-gguf meta-llama-3-8b-instruct.Q8_0.gguf --local-dir . --local-dir-use-symlinks False
```
```python
from llama_cpp import Llama
llm = Llama(
model_path="meta-llama-3-8b-instruct.Q8_0.gguf",
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# seed=1337, # Uncomment to set a specific seed
# n_ctx=2048, # Uncomment to increase the context window
)
output = llm(
"Q: Name the planets in the solar system? A: ", # Prompt
max_tokens=50, # Generate up to 50 tokens, set to None to generate up to the end of the context window
stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
echo=True # Echo the prompt back in the output
)
output
```
```
Llama.generate: prefix-match hit
llama_print_timings: load time = 7770.49 ms
llama_print_timings: sample time = 100.16 ms / 40 runs ( 2.50 ms per token, 399.35 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 35214.73 ms / 40 runs ( 880.37 ms per token, 1.14 tokens per second)
llama_print_timings: total time = 35895.91 ms / 41 tokens
{'id': 'cmpl-01e2feb3-c0ff-4a6e-8ca4-b8bf2172da01',
'object': 'text_completion',
'created': 1713912080,
'model': 'meta-llama-3-8b-instruct.Q8_0.gguf',
'choices': [{'text': 'Q: Name the planets in the solar system? A: 1. Mercury, 2. Venus, 3. Earth, 4. Mars, 5. Jupiter, 6. Saturn, 7. Uranus, 8. Neptune.',
'index': 0,
'logprobs': None,
'finish_reason': 'stop'}],
'usage': {'prompt_tokens': 13, 'completion_tokens': 40, 'total_tokens': 53}}
```
## Google Colab
[Google Colab notebook](https://colab.research.google.com/drive/1vhrCKGzY7KP5mScHNUl7hjmbPsUyj_sj?usp=sharing)
|
{"language": ["en"], "tags": ["meta", "pytorch", "llama", "llama-3", "llama-cpp", "quantized", "8-bit", "GGUF", "8 Billion", "python", "instruct", "google-colab"], "model_name": "meta-llama-3-8B-instruct-gguf", "pipeline_tag": "text-generation", "inference": false, "model_creator": "sourabhdattawad", "quantized_by": "sourabhdattawad", "license_name": "llama3"}
|
sourabhdattawad/meta-llama-3-8b-instruct-gguf
| null |
[
"gguf",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"quantized",
"8-bit",
"GGUF",
"8 Billion",
"python",
"instruct",
"google-colab",
"text-generation",
"en",
"region:us"
] | null |
2024-04-23T21:59:32+00:00
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
{"library_name": "peft", "base_model": "Universal-NER/UniNER-7B-type"}
|
jc80622/unilora_test
| null |
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Universal-NER/UniNER-7B-type",
"region:us"
] | null |
2024-04-23T22:03:23+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.1177
- eval_runtime: 2.8618
- eval_samples_per_second: 69.887
- eval_steps_per_second: 8.736
- epoch: 4.992
- step: 390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
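Since this is a LoRA adapter trained with PEFT, inference requires loading it on top of the gated base model; a minimal sketch (device placement is an illustrative choice):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")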
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1
|
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep", "results": []}]}
|
mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null |
2024-04-23T22:03:42+00:00
|
null | null |
# T3qm7xpPercival_01-7B
T3qm7xpPercival_01-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: nlpguy/T3QM7XP
- model: AurelPx/Percival_01-7b-slerp
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/T3qm7xpPercival_01-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
|
automerger/T3qm7xpPercival_01-7B
| null |
[
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null |
2024-04-23T22:03:44+00:00
|
text-classification
|
transformers
|
{}
|
scott-routledge/bert-question-classifier
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:03:51+00:00
|
|
text-generation
|
transformers
|
# Average Normie v1

A model by an average normie for the average normie.
This model is a stock merge of the following models:
https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3
https://huggingface.co/Sao10K/L3-Solana-8B-v1
https://huggingface.co/ResplendentAI/Kei_Llama3_8B
The final merge then had the following LoRA applied over it:
https://huggingface.co/ResplendentAI/Theory_of_Mind_Llama3
This should be an intelligent and adept roleplaying model.
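For reference, applying and baking in a LoRA over a merged base like this is typically done with PEFT's `merge_and_unload`; the snippet below is a sketch of the general recipe, not the exact script used here.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("jeiku/Average_Normie_l3_v0_8B")
model = PeftModel.from_pretrained(base, "ResplendentAI/Theory_of_Mind_Llama3")
model = model.merge_and_unload()  # fold the LoRA weights into the base weights
model.save_pretrained("Average_Normie_l3_v1_8B")
```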
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["grimulkan/theory-of-mind"], "base_model": ["jeiku/Average_Normie_l3_v0_8B", "ResplendentAI/Theory_of_Mind_Llama3"]}
|
jeiku/Average_Normie_l3_v1_8B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:grimulkan/theory-of-mind",
"base_model:jeiku/Average_Normie_l3_v0_8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T22:04:24+00:00
|
null | null |
{}
|
TMTrix/weathrer
| null |
[
"region:us"
] | null |
2024-04-23T22:04:40+00:00
|
|
null | null |
{"license": "mit"}
|
C0d3h4CK3R/gpt-bible
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-23T22:06:09+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
Multilingual fine-tuned version of LLAMA-3-8B, quantized to 4 bits.
## Model Details
### Model Description
Multilingual fine-tuned version of LLAMA-3-8B, quantized to 4 bits, trained on common open-source datasets and showing improvements on multilingual tasks.
The standard bit-quantization technique was applied after fine-tuning, reducing the compute time and memory required to run the model. The overall architecture is entirely LLAMA-3 based.
- **Developed by:** Daniele Comi
- **Model type:** LLAMA-3-8B
- **Language(s) (NLP):** Multilingual
- **License:** MIT
- **Finetuned from model:** LLAMA-3-8B
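A minimal loading sketch with 🤗 Transformers; the 4-bit settings below are illustrative (the repository may already ship quantized weights, in which case `quantization_config` can be omitted):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("comidan/llama-3-chat-multilingual-v1-8b")
model = AutoModelForCausalLM.from_pretrained(
    "comidan/llama-3-chat-multilingual-v1-8b",
    quantization_config=bnb_config,
    device_map="auto",
)
```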
|
{"language": ["it", "en"], "license": "mit", "library_name": "transformers"}
|
comidan/llama-3-chat-multilingual-v1-8b
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"it",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-23T22:06:48+00:00
|
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** Mbetyko
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
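A minimal loading sketch with Unsloth (the sequence length and 4-bit flag are illustrative assumptions, mirroring the bnb-4bit base model):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "Mbetyko/basket",
    max_seq_length=2048,  # illustrative; set to the context length you need
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```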
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
|
Mbetyko/basket
| null |
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:07:20+00:00
|
null | null |
{"license": "apache-2.0"}
|
kazooryuryu/Lee.Heeseung
| null |
[
"doi:10.57967/hf/2118",
"license:apache-2.0",
"region:us"
] | null |
2024-04-23T22:07:47+00:00
|
|
text-classification
|
transformers
|
{}
|
titanbot/Electra-Large-RTE
| null |
[
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:07:50+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_1_1_robeczech-base
This model is a fine-tuned version of [ufal/robeczech-base](https://huggingface.co/ufal/robeczech-base) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3233
- Precision: 0.8580
- Recall: 0.8857
- F1: 0.8716
- Accuracy: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3724 | 3.41 | 2000 | 0.3332 | 0.7990 | 0.8230 | 0.8108 | 0.9376 |
| 0.1863 | 6.81 | 4000 | 0.2656 | 0.8515 | 0.8636 | 0.8575 | 0.9455 |
| 0.1109 | 10.22 | 6000 | 0.2575 | 0.8505 | 0.8737 | 0.8619 | 0.9493 |
| 0.068 | 13.63 | 8000 | 0.2804 | 0.8567 | 0.8790 | 0.8677 | 0.9503 |
| 0.0466 | 17.04 | 10000 | 0.2952 | 0.8573 | 0.8830 | 0.8699 | 0.9498 |
| 0.0305 | 20.44 | 12000 | 0.2992 | 0.8618 | 0.8865 | 0.8740 | 0.9520 |
| 0.0231 | 23.85 | 14000 | 0.3272 | 0.8567 | 0.8843 | 0.8703 | 0.9512 |
| 0.02 | 27.26 | 16000 | 0.3233 | 0.8580 | 0.8857 | 0.8716 | 0.9511 |
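For reference, the model can be used for Czech NER through the standard pipeline API (the aggregation strategy and example sentence are illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="stulcrad/CNEC_1_1_robeczech-base",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("Václav Havel se narodil v Praze."))
```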
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "ufal/robeczech-base", "model-index": [{"name": "CNEC_1_1_robeczech-base", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8579982891360137, "name": "Precision"}, {"type": "recall", "value": 0.8856512141280353, "name": "Recall"}, {"type": "f1", "value": 0.8716054746904193, "name": "F1"}, {"type": "accuracy", "value": 0.9511284046692607, "name": "Accuracy"}]}]}]}
|
stulcrad/CNEC_1_1_robeczech-base
| null |
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:ufal/robeczech-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:08:33+00:00
|
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal example of loading the trained agent from the Hub (the checkpoint filename is an assumption based on the standard huggingface_sb3 naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed to follow the usual "<algo>-<env>.zip" convention.
checkpoint = load_from_hub("volverine/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
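The loaded agent can then be rolled out in the environment (gymnasium with the Box2D extra is assumed to be installed):
```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```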
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "261.99 +/- 15.60", "name": "mean_reward", "verified": false}]}]}]}
|
volverine/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-23T22:09:19+00:00
|
text-classification
|
transformers
|
{}
|
greasyFinger/chinese_xl
| null |
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:09:24+00:00
|
|
null | null |
Fine-tuned model for generating research papers, based on Mistral 7B v0.1. It was fine-tuned on arXiv documents collected via the arXiv API.
Will add a longer description later on.
|
{"license": "apache-2.0"}
|
dpetrou00/mistral-paper-generator
| null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null |
2024-04-23T22:10:07+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_2_0_robeczech-base
This model is a fine-tuned version of [ufal/robeczech-base](https://huggingface.co/ufal/robeczech-base) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3306
- Precision: 0.8531
- Recall: 0.8848
- F1: 0.8687
- Accuracy: 0.9545
## Model description
More information needed
## Intended uses & limitations
More information needed
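As a quick sanity check, the checkpoint can be loaded with the standard token-classification pipeline (a minimal sketch; the input sentence and aggregation strategy are assumptions):
```python
from transformers import pipeline

# Hypothetical usage: Czech NER with the CNEC 2.0 fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="stulcrad/CNEC_2_0_robeczech-base",
    aggregation_strategy="simple",  # merge subword pieces into entity spans (an assumption)
)
print(ner("Karel Čapek napsal hru R.U.R. v Praze."))
```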
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4499 | 2.22 | 2000 | 0.3871 | 0.7163 | 0.7099 | 0.7131 | 0.9222 |
| 0.2342 | 4.44 | 4000 | 0.2576 | 0.8149 | 0.8251 | 0.8200 | 0.9451 |
| 0.1449 | 6.67 | 6000 | 0.2407 | 0.8231 | 0.8523 | 0.8375 | 0.9492 |
| 0.1027 | 8.89 | 8000 | 0.2267 | 0.8362 | 0.8748 | 0.8551 | 0.9527 |
| 0.0751 | 11.11 | 10000 | 0.2429 | 0.8394 | 0.8712 | 0.8550 | 0.9522 |
| 0.0473 | 13.33 | 12000 | 0.2633 | 0.8439 | 0.8720 | 0.8577 | 0.9535 |
| 0.0369 | 15.56 | 14000 | 0.2821 | 0.8468 | 0.8755 | 0.8609 | 0.9541 |
| 0.0286 | 17.78 | 16000 | 0.2797 | 0.8534 | 0.8827 | 0.8678 | 0.9558 |
| 0.0234 | 20.0 | 18000 | 0.2860 | 0.8550 | 0.8834 | 0.8690 | 0.9558 |
| 0.0168 | 22.22 | 20000 | 0.3146 | 0.8471 | 0.8795 | 0.8630 | 0.9531 |
| 0.0142 | 24.44 | 22000 | 0.3165 | 0.8488 | 0.8816 | 0.8649 | 0.9530 |
| 0.011 | 26.67 | 24000 | 0.3291 | 0.8518 | 0.8816 | 0.8664 | 0.9537 |
| 0.0092 | 28.89 | 26000 | 0.3306 | 0.8531 | 0.8848 | 0.8687 | 0.9545 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "ufal/robeczech-base", "model-index": [{"name": "CNEC_2_0_robeczech-base", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.853103448275862, "name": "Precision"}, {"type": "recall", "value": 0.8848354792560801, "name": "Recall"}, {"type": "f1", "value": 0.8686797752808989, "name": "F1"}, {"type": "accuracy", "value": 0.954457738324971, "name": "Accuracy"}]}]}]}
|
stulcrad/CNEC_2_0_robeczech-base
| null |
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:ufal/robeczech-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:10:33+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
FranchRamp/bert-finetuned-ner4
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:12:26+00:00
|
null | null |
{}
|
Amit7Singh/videomae-base-finetuned_on_SSBD
| null |
[
"region:us"
] | null |
2024-04-23T22:13:10+00:00
|
|
text-generation
|
transformers
|
# meta-llama/Meta-Llama-3-8B AWQ
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Meta-Llama-3-8B-AWQ"
system_message = "You are Meta-Llama-3-8B, incarnated as a powerful AI. You were created by meta-llama."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only; macOS users should use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
|
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
|
solidrust/Meta-Llama-3-8B-AWQ
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T22:13:30+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds-ner
This model is a fine-tuned version of [manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds](https://huggingface.co/manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3648
- Precision: 0.8205
- Recall: 0.8927
- F1: 0.8551
- Accuracy: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
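For illustration, the checkpoint can be exercised with the token-classification pipeline (a sketch; the Spanish example sentence and aggregation strategy are assumptions, not from the original card):
```python
from transformers import pipeline

# Hypothetical usage: Spanish biomedical NER on ultrasound report text.
ner = pipeline(
    "token-classification",
    model="manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds-ner",
    aggregation_strategy="simple",  # merge subword pieces into entity spans (an assumption)
)
print(ner("Ecografía abdominal: hígado de tamaño y ecoestructura normales."))
```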
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 206 | 0.2948 | 0.7527 | 0.8411 | 0.7945 | 0.9132 |
| No log | 2.0 | 412 | 0.2572 | 0.7746 | 0.8522 | 0.8116 | 0.9235 |
| 0.4194 | 3.0 | 618 | 0.2866 | 0.7759 | 0.8482 | 0.8104 | 0.9215 |
| 0.4194 | 4.0 | 824 | 0.2813 | 0.7878 | 0.8866 | 0.8343 | 0.9235 |
| 0.0971 | 5.0 | 1030 | 0.2902 | 0.7969 | 0.8856 | 0.8389 | 0.9249 |
| 0.0971 | 6.0 | 1236 | 0.3229 | 0.8055 | 0.8846 | 0.8432 | 0.9239 |
| 0.0971 | 7.0 | 1442 | 0.3422 | 0.8028 | 0.8775 | 0.8385 | 0.9208 |
| 0.0459 | 8.0 | 1648 | 0.3215 | 0.8297 | 0.8877 | 0.8577 | 0.9253 |
| 0.0459 | 9.0 | 1854 | 0.3568 | 0.8119 | 0.8866 | 0.8476 | 0.9235 |
| 0.0285 | 10.0 | 2060 | 0.3520 | 0.8145 | 0.8887 | 0.8500 | 0.9235 |
| 0.0285 | 11.0 | 2266 | 0.3597 | 0.8255 | 0.8907 | 0.8569 | 0.9264 |
| 0.0285 | 12.0 | 2472 | 0.3599 | 0.8183 | 0.8887 | 0.8520 | 0.9266 |
| 0.0203 | 13.0 | 2678 | 0.3612 | 0.8195 | 0.8917 | 0.8541 | 0.9246 |
| 0.0203 | 14.0 | 2884 | 0.3649 | 0.8180 | 0.8917 | 0.8533 | 0.9258 |
| 0.0164 | 15.0 | 3090 | 0.3648 | 0.8205 | 0.8927 | 0.8551 | 0.9264 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds", "model-index": [{"name": "finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds-ner", "results": []}]}
|
manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds-ner
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:14:38+00:00
|
question-answering
|
transformers
|
{}
|
titanbot/Electra-Large-SQUADV2
| null |
[
"transformers",
"pytorch",
"electra",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:16:14+00:00
|
|
text2text-generation
|
transformers
|
{}
|
neal61/bikes-ops-t5-small-22
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T22:16:47+00:00
|
|
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
annavtkn/rubert_sentiment_classification_model
| null |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:16:48+00:00
|
null | null |
{"license": "creativeml-openrail-m"}
|
yanex0/penXL-loRA
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null |
2024-04-23T22:18:13+00:00
|
|
text-to-image
|
diffusers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "diffusers"}
|
rubbrband/asianBrmBeautyrealmix_v10
| null |
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-23T22:18:53+00:00
|
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"language": ["zh", "en"], "license": "llama3"}
|
LeeZande/Egg1
| null |
[
"zh",
"en",
"arxiv:1910.09700",
"license:llama3",
"region:us"
] | null |
2024-04-23T22:19:10+00:00
|
text-classification
|
transformers
|
{"license": "apache-2.0"}
|
StormyCreeper/mbtiIE
| null |
[
"transformers",
"safetensors",
"albert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-23T22:20:56+00:00
|
|
text-classification
|
transformers
|
{"license": "apache-2.0"}
|
StormyCreeper/mbtiSN
| null |
[
"transformers",
"safetensors",
"albert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:22:44+00:00
|
|
text-classification
|
transformers
|
{"license": "apache-2.0"}
|
StormyCreeper/mbtiTF
| null |
[
"transformers",
"safetensors",
"albert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:24:10+00:00
|
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7binstruct_summarize
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3475
## Model description
More information needed
## Intended uses & limitations
More information needed
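A minimal sketch of loading this LoRA adapter on top of the base model (repo ids are taken from this card; device placement is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Apply the fine-tuned adapter from this repository on top of the base weights.
model = PeftModel.from_pretrained(base, "prasannab2001/mistral7binstruct_summarize")
```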
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5569 | 0.2119 | 25 | 0.4059 |
| 0.362 | 0.4237 | 50 | 0.3475 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral7binstruct_summarize", "results": []}]}
|
prasannab2001/mistral7binstruct_summarize
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-23T22:24:53+00:00
|
text-classification
|
transformers
|
{"license": "apache-2.0"}
|
StormyCreeper/mbtiPJ
| null |
[
"transformers",
"safetensors",
"albert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:24:59+00:00
|
|
text-classification
|
transformers
|
{}
|
titanbot/Roberta-Large-RTE
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:25:17+00:00
|
|
null | null |
{}
|
Cerastes/longformer-base-4096_finetuned_ner
| null |
[
"region:us"
] | null |
2024-04-23T22:26:09+00:00
|
|
text-to-audio
|
transformers
|
{}
|
ALeblanc/Data_Voice_Cloning
| null |
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:26:50+00:00
|
|
text-generation
|
transformers
|
OpenVINO IR with int4 quantization.
To use it with LocalAI, use the following model definition:
```yaml
name: phi3
backend: transformers
parameters:
model: fakezeta/Phi-3-mini-128k-instruct-ov-int4
context_size: 131072
threads: 6
trust_remote_code: true
type: OVModelForCausalLM
template:
use_tokenizer_template: true
stopwords:
- <|end|>
```
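Outside LocalAI, the OpenVINO IR can also be loaded directly with Optimum-Intel (a sketch under the assumption that the standard `OVModelForCausalLM` loader applies to this export):
```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "fakezeta/Phi-3-mini-128k-instruct-ov-int4"

# Load the int4 OpenVINO IR and its tokenizer; trust_remote_code mirrors
# the LocalAI definition above.
model = OVModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```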
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, Mini version, in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) that each can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

messages = [
    {"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
{"license": "mit"}
|
fakezeta/Phi-3-mini-128k-instruct-ov-int4
| null |
[
"transformers",
"openvino",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:29:31+00:00
|
text-generation
|
transformers
|
{}
|
Crysiss/llama-3-8b-sql-synthetic_text_to_sql
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T22:30:13+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
El-chapoo/Llama_GQA-100m
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T22:30:36+00:00
|
text-generation
|
transformers
|
{}
|
pavlopt/llama2-diagnoseme-all
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T22:31:53+00:00
|
|
text-generation
|
transformers
|
{}
|
ke-lly/45516626_1
| null |
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T22:31:58+00:00
|
|
null | null |
{"license": "creativeml-openrail-m"}
|
Son1Goku/Csub
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null |
2024-04-23T22:32:45+00:00
|
|
feature-extraction
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
dayoon/e5_new_loss_epoch1_from_mel
| null |
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:33:06+00:00
|
text-classification
|
transformers
|
{}
|
titanbot/Roberta-Large-MRPC
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:34:11+00:00
|
|
text-generation
|
transformers
|
OpenVINO IR with int8 quantization.
To use it with LocalAI, use the following model definition:
```yaml
name: phi3
backend: transformers
parameters:
model: fakezeta/Phi-3-mini-128k-instruct-ov-int8
context_size: 131072
threads: 6
trust_remote_code: true
type: OVModelForCausalLM
template:
use_tokenizer_template: true
stopwords:
- <|end|>
```
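Outside LocalAI, the same OpenVINO IR can be loaded directly through `optimum-intel`. A minimal sketch, assuming `optimum[openvino]` is installed; the prompt and generation parameters are illustrative:
```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "fakezeta/Phi-3-mini-128k-instruct-ov-int8"

# The IR is already int8-quantized, so no further conversion happens at load time.
model = OVModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("<|user|>\nWhat is OpenVINO?<|end|>\n<|assistant|>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```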
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality, reasoning-dense properties.
The model belongs to the Phi-3 family, Mini version, and comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) it can support.
The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is suited for applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models and to serve as a building block for generative AI-powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated into the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
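Rather than assembling these tags by hand, the tokenizer's chat template can render them. A minimal sketch; the exact rendering depends on the chat template shipped with the checkpoint:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "I am going to Paris, what should I see?"},
]

# Render the messages into the prompt format shown above; add_generation_prompt
# appends the trailing <|assistant|> tag so the model continues from there.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```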
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,  # return only the newly generated text
    "temperature": 0.0,
    "do_sample": False,  # greedy decoding; with sampling disabled, temperature has no effect
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
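For orientation only, a stripped-down single-GPU sketch of the same idea might look like the following; the dataset choice is an assumption, and `SFTTrainer` arguments vary across TRL versions, so refer to the linked script for the actual setup:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

model_id = "microsoft/Phi-3-mini-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Example chat-format SFT dataset (an assumption; substitute your own data).
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

trainer = SFTTrainer(model=model, tokenizer=tokenizer, train_dataset=dataset)
trainer.train()
```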
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"` (see the sketch after this list)
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
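A minimal sketch of the eager-attention fallback (model id taken from the card above):
```python
from transformers import AutoModelForCausalLM

# On GPUs without flash-attention support (e.g. V100 or earlier),
# fall back to the eager attention implementation.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",
)
```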
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
{"license": "mit"}
|
fakezeta/Phi-3-mini-128k-instruct-ov-int8
| null |
[
"transformers",
"openvino",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:34:36+00:00
|
null | null |
{"license": "openrail"}
|
coivmn/fuko
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-23T22:35:27+00:00
|
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
akankshya107/llava_dpt_1
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:35:41+00:00
|
null | null |
{}
|
Anastasia2024/tinyllama_arithmetic5
| null |
[
"safetensors",
"region:us"
] | null |
2024-04-23T22:36:02+00:00
|
|
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub("cmattoon/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "266.77 +/- 19.25", "name": "mean_reward", "verified": false}]}]}]}
|
cmattoon/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-23T22:36:41+00:00
|
text-classification
|
transformers
|
{}
|
ltuzova/amazon_helpfulness_classification_on_TAPT_pretrained_freeze
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T22:36:46+00:00
|