pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
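Since the card leaves this section as a placeholder, here is a minimal, assumed loading sketch using the standard transformers causal-LM API; the repo's `custom_code` tag suggests `trust_remote_code=True` is needed, but this is not confirmed by the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TitanML/tiny-jamba"

# The custom_code tag suggests the repo ships its own modeling code,
# so trust_remote_code=True is likely required (assumption).
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```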
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | TitanML/tiny-jamba | null | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:30:48+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #jamba #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #jamba #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-1.1b-chat-dpo-qlora
This model is a fine-tuned version of [martimfasantos/tinyllama-1.1b-chat-sft-qlora](https://huggingface.co/martimfasantos/tinyllama-1.1b-chat-sft-qlora) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6084
- Rewards/chosen: -1.0875
- Rewards/rejected: -1.3916
- Rewards/accuracies: 0.6580
- Rewards/margins: 0.3041
- Logps/rejected: -490.8393
- Logps/chosen: -504.9714
- Logits/rejected: -2.6096
- Logits/chosen: -2.6425
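As a sanity check on the logged metrics (a general property of DPO logging, not specific to this run): Rewards/margins is the difference between the chosen and rejected rewards, i.e. -1.0875 - (-1.3916) = 0.3041.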
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
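A hedged sketch of how the hyperparameters above might be wired into a trl DPO run (illustrative only, not the original training script; `DPOTrainer` argument names vary across trl versions, and the chat-template preprocessing of the dataset is omitted):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_repo = "martimfasantos/tinyllama-1.1b-chat-sft-qlora"  # PEFT adapter repo
tokenizer = AutoTokenizer.from_pretrained(sft_repo)
# transformers' PEFT integration resolves the base model behind the adapter.
model = AutoModelForCausalLM.from_pretrained(sft_repo)

# Preference data; real runs also apply the chat template to prompt/chosen/rejected.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = TrainingArguments(
    output_dir="tinyllama-1.1b-chat-dpo-qlora",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 per device x 4 accumulation -> total batch 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a PEFT model, the frozen base weights serve as the reference
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```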
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6921 | 0.03 | 100 | 0.6923 | 0.0160 | 0.0142 | 0.5645 | 0.0018 | -350.2683 | -394.6286 | -2.7841 | -2.8363 |
| 0.6894 | 0.05 | 200 | 0.6894 | 0.0433 | 0.0353 | 0.5920 | 0.0080 | -348.1495 | -391.8949 | -2.7811 | -2.8333 |
| 0.6815 | 0.08 | 300 | 0.6844 | 0.0806 | 0.0609 | 0.6025 | 0.0196 | -345.5898 | -388.1692 | -2.7838 | -2.8349 |
| 0.6869 | 0.1 | 400 | 0.6788 | 0.0607 | 0.0269 | 0.6125 | 0.0339 | -348.9979 | -390.1522 | -2.7931 | -2.8423 |
| 0.6744 | 0.13 | 500 | 0.6724 | 0.0243 | -0.0249 | 0.6210 | 0.0492 | -354.1764 | -393.7983 | -2.7889 | -2.8371 |
| 0.6679 | 0.16 | 600 | 0.6625 | -0.0566 | -0.1346 | 0.6265 | 0.0780 | -365.1402 | -401.8826 | -2.7709 | -2.8179 |
| 0.637 | 0.18 | 700 | 0.6555 | -0.2568 | -0.3654 | 0.6290 | 0.1086 | -388.2211 | -421.9038 | -2.7596 | -2.8051 |
| 0.6166 | 0.21 | 800 | 0.6488 | -0.3935 | -0.5223 | 0.6320 | 0.1288 | -403.9116 | -435.5756 | -2.7523 | -2.7961 |
| 0.6335 | 0.24 | 900 | 0.6458 | -0.4516 | -0.6042 | 0.6380 | 0.1527 | -412.1083 | -441.3798 | -2.7325 | -2.7764 |
| 0.6286 | 0.26 | 1000 | 0.6406 | -0.8692 | -1.0442 | 0.625 | 0.1750 | -456.1026 | -483.1429 | -2.7123 | -2.7531 |
| 0.669 | 0.29 | 1100 | 0.6406 | -0.3445 | -0.4984 | 0.6365 | 0.1538 | -401.5222 | -430.6789 | -2.6946 | -2.7354 |
| 0.6723 | 0.31 | 1200 | 0.6358 | -0.4619 | -0.6430 | 0.6425 | 0.1811 | -415.9841 | -442.4163 | -2.6701 | -2.7077 |
| 0.605 | 0.34 | 1300 | 0.6297 | -0.6894 | -0.8903 | 0.6435 | 0.2009 | -440.7144 | -465.1627 | -2.6764 | -2.7122 |
| 0.6361 | 0.37 | 1400 | 0.6267 | -0.7144 | -0.9307 | 0.6505 | 0.2163 | -444.7496 | -467.6648 | -2.6711 | -2.7091 |
| 0.6085 | 0.39 | 1500 | 0.6213 | -1.0532 | -1.3084 | 0.6490 | 0.2552 | -482.5256 | -501.5469 | -2.6435 | -2.6797 |
| 0.6317 | 0.42 | 1600 | 0.6197 | -1.1246 | -1.3825 | 0.6490 | 0.2579 | -489.9323 | -508.6858 | -2.6172 | -2.6506 |
| 0.6702 | 0.44 | 1700 | 0.6182 | -1.0036 | -1.2644 | 0.6530 | 0.2609 | -478.1268 | -496.5815 | -2.6407 | -2.6762 |
| 0.5658 | 0.47 | 1800 | 0.6219 | -1.3479 | -1.6348 | 0.6445 | 0.2869 | -515.1606 | -531.0145 | -2.5866 | -2.6182 |
| 0.6039 | 0.5 | 1900 | 0.6154 | -0.9014 | -1.1716 | 0.6630 | 0.2702 | -468.8458 | -486.3656 | -2.6376 | -2.6742 |
| 0.6173 | 0.52 | 2000 | 0.6121 | -1.1535 | -1.4470 | 0.6575 | 0.2934 | -496.3810 | -511.5793 | -2.6232 | -2.6580 |
| 0.62 | 0.55 | 2100 | 0.6116 | -1.1600 | -1.4523 | 0.6650 | 0.2923 | -496.9117 | -512.2247 | -2.6278 | -2.6629 |
| 0.5957 | 0.58 | 2200 | 0.6132 | -0.9592 | -1.2431 | 0.6655 | 0.2839 | -475.9958 | -492.1489 | -2.6317 | -2.6674 |
| 0.6093 | 0.6 | 2300 | 0.6138 | -1.0935 | -1.3811 | 0.6625 | 0.2876 | -489.7906 | -505.5738 | -2.6283 | -2.6619 |
| 0.6009 | 0.63 | 2400 | 0.6108 | -1.0519 | -1.3479 | 0.6610 | 0.2959 | -486.4695 | -501.4175 | -2.6088 | -2.6432 |
| 0.5988 | 0.65 | 2500 | 0.6108 | -1.0427 | -1.3419 | 0.6590 | 0.2992 | -485.8730 | -500.4982 | -2.6143 | -2.6477 |
| 0.606 | 0.68 | 2600 | 0.6112 | -1.0188 | -1.3192 | 0.6545 | 0.3003 | -483.6013 | -498.1078 | -2.5974 | -2.6304 |
| 0.6118 | 0.71 | 2700 | 0.6106 | -1.0808 | -1.3857 | 0.6595 | 0.3049 | -490.2562 | -504.3045 | -2.5945 | -2.6274 |
| 0.6134 | 0.73 | 2800 | 0.6096 | -1.1549 | -1.4635 | 0.6585 | 0.3086 | -498.0366 | -511.7179 | -2.5978 | -2.6303 |
| 0.6159 | 0.76 | 2900 | 0.6097 | -1.0550 | -1.3509 | 0.6585 | 0.2959 | -486.7739 | -501.7256 | -2.6175 | -2.6500 |
| 0.5815 | 0.79 | 3000 | 0.6091 | -1.1025 | -1.4048 | 0.6570 | 0.3023 | -492.1650 | -506.4727 | -2.6089 | -2.6420 |
| 0.5885 | 0.81 | 3100 | 0.6089 | -1.0977 | -1.4006 | 0.6595 | 0.3029 | -491.7444 | -505.9960 | -2.6001 | -2.6337 |
| 0.6074 | 0.84 | 3200 | 0.6086 | -1.0982 | -1.4029 | 0.6605 | 0.3047 | -491.9724 | -506.0455 | -2.6056 | -2.6388 |
| 0.5981 | 0.86 | 3300 | 0.6087 | -1.0853 | -1.3881 | 0.6610 | 0.3028 | -490.4915 | -504.7571 | -2.6117 | -2.6442 |
| 0.5944 | 0.89 | 3400 | 0.6087 | -1.0897 | -1.3931 | 0.6580 | 0.3034 | -490.9887 | -505.1947 | -2.6026 | -2.6360 |
| 0.5979 | 0.92 | 3500 | 0.6085 | -1.0922 | -1.3962 | 0.6595 | 0.3040 | -491.3070 | -505.4438 | -2.6136 | -2.6460 |
| 0.6154 | 0.94 | 3600 | 0.6086 | -1.0905 | -1.3946 | 0.6595 | 0.3040 | -491.1413 | -505.2781 | -2.6066 | -2.6397 |
| 0.6053 | 0.97 | 3700 | 0.6086 | -1.0907 | -1.3946 | 0.6550 | 0.3039 | -491.1405 | -505.2943 | -2.6094 | -2.6423 |
| 0.602 | 0.99 | 3800 | 0.6085 | -1.0876 | -1.3914 | 0.6580 | 0.3038 | -490.8211 | -504.9807 | -2.6096 | -2.6425 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "model-index": [{"name": "tinyllama-1.1b-chat-dpo-qlora", "results": []}]} | martimfasantos/tinyllama-1.1b-chat-dpo-qlora | null | [
"peft",
"tensorboard",
"safetensors",
"llama",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2024-04-23T21:32:30+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #llama #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #4-bit #region-us
| tinyllama-1.1b-chat-dpo-qlora
=============================
This model is a fine-tuned version of martimfasantos/tinyllama-1.1b-chat-sft-qlora on the HuggingFaceH4/ultrafeedback\_binarized dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6084
* Rewards/chosen: -1.0875
* Rewards/rejected: -1.3916
* Rewards/accuracies: 0.6580
* Rewards/margins: 0.3041
* Logps/rejected: -490.8393
* Logps/chosen: -504.9714
* Logits/rejected: -2.6096
* Logits/chosen: -2.6425
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #llama #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #4-bit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # GALAXY-16B-v1.0

## Technical notes
- 72 layers, DUS procedure: mistral(32)->SOLAR(48)->GALAXY(72) (see the sketch below)
- 16B parameters
- model created as an extension of the depth upscaling procedure used for SOLAR by upstage
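A rough sketch of depth upscaling by layer duplication (illustrative only, not the author's actual procedure; the layer split below is an assumption, since the exact ranges used for GALAXY are not published here):

```python
import copy

import torch.nn as nn
from transformers import AutoModelForCausalLM

# Hypothetical DUS step: grow a 48-layer SOLAR-style model to 72 layers by
# re-inserting a deep-copied span of middle layers, then continue training.
model = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-v1.0")
layers = model.model.layers                                # 48 decoder layers
span = [copy.deepcopy(layer) for layer in layers[12:36]]   # 24 duplicates (assumed split)
model.model.layers = nn.ModuleList(list(layers[:36]) + span + list(layers[36:]))
model.config.num_hidden_layers = len(model.model.layers)  # now 72
# The upscaled model would then be further pretrained / fine-tuned.
```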
## Results
- model can and will produce NSFW content
- waiting for eval results | {"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]} | TeeZee/GALAXY-16B-v1.0-bpw8.0-h8-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"dataset:Open-Orca/SlimOrca",
"dataset:MinervaAI/Aesir-Preview",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T21:36:10+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # GALAXY-16B-v1.0
!image/png
## Technical notes
- 72 layers, DUS procedure: mistral(32)->SOLAR(48)->GALAXY(72)
- 16B parameters
- model created as an extension of the depth upscaling procedure used for SOLAR by upstage
## Results
- model can and will produce NSFW content
- waiting for eval results | [
"# GALAXY-16B-v1.0\n\n!image/png",
"## Technical notes\n- 72 layers,DUS procedure, mistral(32)->SOLAR(48)->GALAXY(72)\n- 16B parameters\n- model created as a extension of depth upscaling procedure used for SOLAR by upstage",
"## Results\n- model can and will produce NSFW content\n- waiting for eval results"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GALAXY-16B-v1.0\n\n!image/png",
"## Technical notes\n- 72 layers,DUS procedure, mistral(32)->SOLAR(48)->GALAXY(72)\n- 16B parameters\n- model created as a extension of depth upscaling procedure used for SOLAR by upstage",
"## Results\n- model can and will produce NSFW content\n- waiting for eval results"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.5457
- eval_runtime: 2.9825
- eval_samples_per_second: 67.059
- eval_steps_per_second: 8.382
- epoch: 1.9968
- step: 156
## Model description
More information needed
## Intended uses & limitations
More information needed
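In the absence of usage guidance, a minimal (assumed) way to load the published LoRA adapter on top of its base model with peft:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated repo; access assumed
adapter_id = "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep"

# Load the frozen base model, then attach the fine-tuned LoRA weights.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)
```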
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep", "results": []}]} | mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-23T21:37:00+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.5457
- eval_runtime: 2.9825
- eval_samples_per_second: 67.059
- eval_steps_per_second: 8.382
- epoch: 1.9968
- step: 156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | [
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.5457\n- eval_runtime: 2.9825\n- eval_samples_per_second: 67.059\n- eval_steps_per_second: 8.382\n- epoch: 1.9968\n- step: 156",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_2ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.5457\n- eval_runtime: 2.9825\n- eval_samples_per_second: 67.059\n- eval_steps_per_second: 8.382\n- epoch: 1.9968\n- step: 156",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.2242
- eval_runtime: 2.8668
- eval_samples_per_second: 69.763
- eval_steps_per_second: 8.72
- epoch: 2.9952
- step: 234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
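A hedged sketch of how these hyperparameters might map onto a trl SFT run with a LoRA config (the LoRA settings and dataset wiring below are assumptions, not taken from the card; `SFTTrainer` argument names vary across trl versions):

```python
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed values

args = TrainingArguments(
    output_dir="Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,  # 2 per device x 32 accumulation -> total batch 64
    lr_scheduler_type="linear",
    num_train_epochs=5,
    seed=0,
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # gated repo; access assumed
    args=args,
    train_dataset=train_dataset,  # assumed: an e-SNLI-derived dataset prepared elsewhere
    peft_config=peft_config,
)
trainer.train()
```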
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep", "results": []}]} | mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-23T21:37:16+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.2242
- eval_runtime: 2.8668
- eval_samples_per_second: 69.763
- eval_steps_per_second: 8.72
- epoch: 2.9952
- step: 234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | [
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.2242\n- eval_runtime: 2.8668\n- eval_samples_per_second: 69.763\n- eval_steps_per_second: 8.72\n- epoch: 2.9952\n- step: 234",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_3ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.2242\n- eval_runtime: 2.8668\n- eval_samples_per_second: 69.763\n- eval_steps_per_second: 8.72\n- epoch: 2.9952\n- step: 234",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v2-WtP-FT-12L-256BS-UD-Opus-cUD-cOpus
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1157
- Precision: 0.6058
- Recall: 0.73
- F1: 0.6621
- Threshold: 0.4
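For reference, the reported F1 is the harmonic mean of precision and recall: 2 * 0.6058 * 0.73 / (0.6058 + 0.73) ≈ 0.6621.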
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
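The per-dataset rows below report precision/recall/F1 at the best probability threshold found for that dataset. An illustrative sketch of that selection (not the trainer's actual code; the candidate grid is an assumption):

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def best_threshold_f1(probs: np.ndarray, labels: np.ndarray):
    """Sweep candidate thresholds over binary token probabilities and
    return (precision, recall, f1, threshold) at the F1-maximizing one."""
    best = (0.0, 0.0, 0.0, 0.0)
    for t in np.arange(0.05, 1.0, 0.05):
        preds = (probs >= t).astype(int)
        p, r, f1, _ = precision_recall_fscore_support(
            labels, preds, average="binary", zero_division=0
        )
        if f1 > best[2]:
            best = (p, r, f1, t)
    return best
```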
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Threshold |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:---------:|
| No log | 0.59 | 250 | 0.0430 | 0.9104 | 0.915 | 0.9127 | 0.4 |
| No log | 0.59 | 250 | 0.0173 | 0.8413 | 0.875 | 0.8578 | 0.4 |
| No log | 0.59 | 250 | 0.0374 | 0.8814 | 0.855 | 0.8680 | 0.5 |
| No log | 0.59 | 250 | 0.0191 | 0.8539 | 0.935 | 0.8926 | 0.2 |
| No log | 0.59 | 250 | 0.0298 | 0.9391 | 0.925 | 0.9320 | 0.6 |
| No log | 0.59 | 250 | 0.0104 | 0.9755 | 0.995 | 0.9851 | 0.8 |
| No log | 0.59 | 250 | 0.0161 | 0.9391 | 0.9296 | 0.9343 | 0.6 |
| No log | 0.59 | 250 | 0.0104 | 0.9706 | 0.99 | 0.9802 | 0.7000 |
| No log | 0.59 | 250 | 0.0162 | 0.9387 | 0.995 | 0.9660 | 0.7000 |
| No log | 0.59 | 250 | 0.0376 | 0.9091 | 0.9 | 0.9045 | 0.5 |
| No log | 0.59 | 250 | 0.0119 | 0.9522 | 0.995 | 0.9731 | 0.6 |
| No log | 0.59 | 250 | 0.0178 | 0.9234 | 0.965 | 0.9438 | 0.8 |
| No log | 0.59 | 250 | 0.0089 | 0.9479 | 1.0 | 0.9732 | 0.3000 |
| No log | 0.59 | 250 | 0.0239 | 0.9299 | 0.995 | 0.9614 | 0.7000 |
| No log | 0.59 | 250 | 0.0165 | 0.9431 | 0.995 | 0.9684 | 0.5 |
| No log | 0.59 | 250 | 0.0118 | 0.9423 | 0.98 | 0.9608 | 0.6 |
| No log | 0.59 | 250 | 0.0166 | 0.95 | 0.9645 | 0.9572 | 0.9 |
| No log | 0.59 | 250 | 0.0153 | 0.9245 | 0.98 | 0.9515 | 0.6 |
| No log | 0.59 | 250 | 0.0529 | 0.9101 | 0.8141 | 0.8594 | 0.7000 |
| No log | 0.59 | 250 | 0.0183 | 0.9299 | 0.995 | 0.9614 | 0.7000 |
| No log | 0.59 | 250 | 0.0124 | 0.9249 | 0.985 | 0.9540 | 0.4 |
| No log | 0.59 | 250 | 0.0415 | 0.9505 | 0.96 | 0.9552 | 0.3000 |
| No log | 0.59 | 250 | 0.0060 | 0.9793 | 0.945 | 0.9618 | 0.7000 |
| No log | 0.59 | 250 | 0.0097 | 0.9552 | 0.9746 | 0.9648 | 0.4 |
| No log | 0.59 | 250 | 0.0221 | 0.9423 | 0.98 | 0.9608 | 0.6 |
| No log | 0.59 | 250 | 0.0602 | 0.8537 | 0.875 | 0.8642 | 0.4 |
| No log | 0.59 | 250 | 0.0082 | 0.9122 | 0.9397 | 0.9257 | 0.5 |
| No log | 0.59 | 250 | 0.0245 | 0.8884 | 0.995 | 0.9387 | 0.3000 |
| No log | 0.59 | 250 | 0.0221 | 0.9128 | 0.89 | 0.9013 | 0.6 |
| No log | 0.59 | 250 | 0.0159 | 0.9476 | 0.995 | 0.9707 | 0.4 |
| No log | 0.59 | 250 | 0.0345 | 0.8995 | 0.985 | 0.9403 | 0.064 |
| No log | 0.59 | 250 | 0.0259 | 0.9387 | 0.995 | 0.9660 | 0.6 |
| No log | 0.59 | 250 | 0.0154 | 0.9588 | 0.93 | 0.9442 | 0.5 |
| No log | 0.59 | 250 | 0.0115 | 0.9709 | 1.0 | 0.9852 | 0.5 |
| No log | 0.59 | 250 | 0.0104 | 0.975 | 0.975 | 0.975 | 0.7000 |
| No log | 0.59 | 250 | 0.0812 | 0.9123 | 0.78 | 0.8410 | 0.5 |
| No log | 0.59 | 250 | 0.0137 | 0.9375 | 0.975 | 0.9559 | 0.7000 |
| No log | 0.59 | 250 | 0.0257 | 0.9610 | 0.985 | 0.9728 | 0.062 |
| No log | 0.59 | 250 | 0.0739 | 0.8167 | 0.7387 | 0.7757 | 0.2 |
| No log | 0.59 | 250 | 0.0484 | 0.9275 | 0.8995 | 0.9133 | 0.3000 |
| No log | 0.59 | 250 | 0.0569 | 0.8267 | 0.93 | 0.8753 | 0.5 |
| No log | 0.59 | 250 | 0.0152 | 0.9265 | 0.945 | 0.9356 | 0.2 |
| No log | 0.59 | 250 | 0.0146 | 0.9801 | 0.985 | 0.9825 | 0.3000 |
| No log | 0.59 | 250 | 0.0058 | 0.9604 | 0.9749 | 0.9676 | 0.4 |
| No log | 0.59 | 250 | 0.0092 | 0.9686 | 0.925 | 0.9463 | 0.9 |
| No log | 0.59 | 250 | 0.0055 | 0.9747 | 0.965 | 0.9698 | 0.9 |
| No log | 0.59 | 250 | 0.0111 | 0.9524 | 1.0 | 0.9756 | 0.6 |
| No log | 0.59 | 250 | 0.0345 | 0.8884 | 0.955 | 0.9205 | 0.5 |
| No log | 0.59 | 250 | 0.0179 | 0.9852 | 1.0 | 0.9926 | 0.2 |
| No log | 0.59 | 250 | 0.0214 | 0.9517 | 0.985 | 0.9681 | 0.3000 |
| No log | 0.59 | 250 | 0.0188 | 0.9612 | 0.99 | 0.9754 | 0.8 |
| No log | 0.59 | 250 | 0.0075 | 0.9365 | 0.8985 | 0.9171 | 0.9 |
| No log | 0.59 | 250 | 0.0661 | 0.8122 | 0.8 | 0.8060 | 0.2 |
| No log | 0.59 | 250 | 0.0637 | 0.8495 | 0.875 | 0.8621 | 0.3000 |
| No log | 0.59 | 250 | 0.0137 | 0.9657 | 0.985 | 0.9752 | 0.9 |
| No log | 0.59 | 250 | 0.0154 | 0.9524 | 1.0 | 0.9756 | 0.3000 |
| No log | 0.59 | 250 | 0.1067 | 0.7964 | 0.88 | 0.8361 | 0.2 |
| No log | 0.59 | 250 | 0.0097 | 0.9522 | 0.995 | 0.9731 | 0.5 |
| No log | 0.59 | 250 | 0.1296 | 0.8382 | 0.855 | 0.8465 | 0.4 |
| No log | 0.59 | 250 | 0.0123 | 0.9524 | 1.0 | 0.9756 | 0.7000 |
| No log | 0.59 | 250 | 0.0092 | 0.9707 | 0.995 | 0.9827 | 0.4 |
| No log | 0.59 | 250 | 0.0073 | 0.9372 | 0.97 | 0.9533 | 0.7000 |
| No log | 0.59 | 250 | 0.0497 | 0.9055 | 0.91 | 0.9077 | 0.5 |
| No log | 0.59 | 250 | 0.0071 | 0.9706 | 0.99 | 0.9802 | 0.7000 |
| No log | 0.59 | 250 | 0.0119 | 0.9706 | 0.99 | 0.9802 | 0.9 |
| No log | 0.59 | 250 | 0.0136 | 0.9463 | 0.97 | 0.9580 | 0.9 |
| No log | 0.59 | 250 | 0.0165 | 0.9567 | 0.995 | 0.9755 | 0.2 |
| No log | 0.59 | 250 | 0.0083 | 0.9615 | 1.0 | 0.9804 | 0.6 |
| No log | 0.59 | 250 | 0.0331 | 0.9135 | 0.845 | 0.8779 | 0.4 |
| No log | 0.59 | 250 | 0.0670 | 0.8756 | 0.845 | 0.8601 | 0.4 |
| No log | 0.59 | 250 | 0.0113 | 0.9108 | 0.97 | 0.9395 | 0.3000 |
| No log | 0.59 | 250 | 0.0684 | 0.8018 | 0.87 | 0.8345 | 0.6 |
| No log | 0.59 | 250 | 0.0122 | 0.9476 | 0.995 | 0.9707 | 0.2 |
| No log | 0.59 | 250 | 0.0186 | 0.9245 | 0.98 | 0.9515 | 0.6 |
| No log | 0.59 | 250 | 0.0204 | 0.8585 | 0.88 | 0.8691 | 0.6 |
| No log | 0.59 | 250 | 0.0088 | 0.9479 | 0.91 | 0.9286 | 0.5 |
| No log | 0.59 | 250 | 0.0176 | 0.9346 | 1.0 | 0.9662 | 0.2 |
| No log | 0.59 | 250 | 0.0157 | 0.9529 | 0.91 | 0.9309 | 0.6 |
| No log | 0.59 | 250 | 0.0550 | 0.8720 | 0.92 | 0.8954 | 0.2 |
| No log | 0.59 | 250 | 0.0230 | 0.875 | 0.91 | 0.8922 | 0.4 |
| No log | 0.59 | 250 | 0.0322 | 0.8670 | 0.8889 | 0.8778 | 0.2 |
| No log | 0.59 | 250 | 0.0325 | 0.9630 | 0.91 | 0.9357 | 0.6 |
| No log | 0.59 | 250 | 0.1328 | 0.7940 | 0.79 | 0.7920 | 0.4 |
| No log | 0.59 | 250 | 0.0253 | 0.8267 | 0.835 | 0.8308 | 0.5 |
| No log | 0.59 | 250 | 0.0647 | 0.6867 | 0.855 | 0.7617 | 0.3000 |
| No log | 0.59 | 250 | 0.0258 | 0.7906 | 0.925 | 0.8525 | 0.3000 |
| No log | 0.59 | 250 | 0.0857 | 0.8333 | 0.8 | 0.8163 | 0.4 |
| No log | 0.59 | 250 | 0.0938 | 0.732 | 0.915 | 0.8133 | 0.3000 |
| No log | 0.59 | 250 | 0.0724 | 0.5541 | 0.4372 | 0.4888 | 0.4 |
| No log | 0.59 | 250 | 0.0525 | 0.7787 | 0.915 | 0.8414 | 0.3000 |
| No log | 0.59 | 250 | 0.0538 | 0.86 | 0.86 | 0.8600 | 0.6 |
| No log | 0.59 | 250 | 0.1075 | 0.7843 | 0.8 | 0.7921 | 0.4 |
| No log | 0.59 | 250 | 0.0536 | 0.7879 | 0.91 | 0.8445 | 0.4 |
| No log | 0.59 | 250 | 0.0341 | 0.8216 | 0.875 | 0.8475 | 0.5 |
| No log | 0.59 | 250 | 0.0674 | 0.7762 | 0.815 | 0.7951 | 0.5 |
| No log | 0.59 | 250 | 0.0671 | 0.9021 | 0.875 | 0.8883 | 0.7000 |
| No log | 0.59 | 250 | 0.0626 | 0.8969 | 0.87 | 0.8832 | 0.7000 |
| No log | 0.59 | 250 | 0.0498 | 0.8307 | 0.785 | 0.8072 | 0.6 |
| No log | 0.59 | 250 | 0.0419 | 0.7860 | 0.8492 | 0.8164 | 0.5 |
| No log | 0.59 | 250 | 0.0615 | 0.7732 | 0.75 | 0.7614 | 0.5 |
| No log | 0.59 | 250 | 0.0806 | 0.7124 | 0.83 | 0.7667 | 0.5 |
| No log | 0.59 | 250 | 0.0570 | 0.8381 | 0.88 | 0.8585 | 0.5 |
| No log | 0.59 | 250 | 0.0404 | 0.8602 | 0.8 | 0.8290 | 0.6 |
| No log | 0.59 | 250 | 0.1475 | 0.7015 | 0.94 | 0.8034 | 0.062 |
| No log | 0.59 | 250 | 0.0237 | 0.8466 | 0.8 | 0.8226 | 0.5 |
| No log | 0.59 | 250 | 0.0517 | 0.8020 | 0.8223 | 0.8120 | 0.4 |
| No log | 0.59 | 250 | 0.0732 | 0.8224 | 0.88 | 0.8502 | 0.5 |
| No log | 0.59 | 250 | 0.1005 | 0.6875 | 0.6633 | 0.6752 | 0.3000 |
| No log | 0.59 | 250 | 0.0285 | 0.7427 | 0.765 | 0.7537 | 0.4 |
| No log | 0.59 | 250 | 0.0934 | 0.6889 | 0.93 | 0.7915 | 0.4 |
| No log | 0.59 | 250 | 0.0430 | 0.7968 | 0.745 | 0.7700 | 0.5 |
| No log | 0.59 | 250 | 0.0675 | 0.805 | 0.805 | 0.805 | 0.5 |
| No log | 0.59 | 250 | 0.0738 | 0.9056 | 0.815 | 0.8579 | 0.6 |
| No log | 0.59 | 250 | 0.1196 | 0.7336 | 0.84 | 0.7832 | 0.5 |
| No log | 0.59 | 250 | 0.0812 | 0.6231 | 0.835 | 0.7137 | 0.2 |
| No log | 0.59 | 250 | 0.0760 | 0.7662 | 0.77 | 0.7681 | 0.5 |
| No log | 0.59 | 250 | 0.0524 | 0.7792 | 0.9045 | 0.8372 | 0.4 |
| No log | 0.59 | 250 | 0.1207 | 0.7711 | 0.775 | 0.7731 | 0.4 |
| No log | 0.59 | 250 | 0.0881 | 0.3414 | 0.565 | 0.4256 | 0.3000 |
| No log | 0.59 | 250 | 0.1086 | 0.8507 | 0.855 | 0.8529 | 0.3000 |
| No log | 0.59 | 250 | 0.1118 | 0.6136 | 0.6784 | 0.6444 | 0.1 |
| No log | 0.59 | 250 | 0.1151 | 0.8382 | 0.7286 | 0.7796 | 0.3000 |
| No log | 0.59 | 250 | 0.0918 | 0.7185 | 0.855 | 0.7808 | 0.4 |
| No log | 0.59 | 250 | 0.0311 | 0.8194 | 0.8939 | 0.8551 | 0.2 |
| No log | 0.59 | 250 | 0.0843 | 0.8372 | 0.9 | 0.8675 | 0.3000 |
| No log | 0.59 | 250 | 0.0297 | 0.8710 | 0.8141 | 0.8416 | 0.5 |
| No log | 0.59 | 250 | 0.0345 | 0.8245 | 0.775 | 0.7990 | 0.6 |
| No log | 0.59 | 250 | 0.0439 | 0.6682 | 0.705 | 0.6861 | 0.5 |
| No log | 0.59 | 250 | 0.0690 | 0.8221 | 0.855 | 0.8382 | 0.6 |
| No log | 0.59 | 250 | 0.0684 | 0.6849 | 0.75 | 0.7160 | 0.4 |
| No log | 0.59 | 250 | 0.0747 | 0.9130 | 0.945 | 0.9287 | 0.3000 |
| No log | 0.59 | 250 | 0.0890 | 0.8272 | 0.67 | 0.7403 | 0.5 |
| No log | 0.59 | 250 | 0.1415 | 0.7436 | 0.725 | 0.7342 | 0.6 |
| No log | 0.59 | 250 | 0.0252 | 0.7975 | 0.6332 | 0.7059 | 0.6 |
| No log | 0.59 | 250 | 0.0903 | 0.65 | 0.8492 | 0.7364 | 0.097 |
| No log | 0.59 | 250 | 0.1004 | 0.8342 | 0.83 | 0.8321 | 0.4 |
| No log | 0.59 | 250 | 0.0544 | 0.8136 | 0.895 | 0.8524 | 0.6 |
| No log | 0.59 | 250 | 0.0663 | 0.8738 | 0.9 | 0.8867 | 0.6 |
| No log | 0.59 | 250 | 0.1370 | 0.8219 | 0.6 | 0.6936 | 0.4 |
| No log | 0.59 | 250 | 0.0606 | 0.8122 | 0.865 | 0.8378 | 0.5 |
| No log | 0.59 | 250 | 0.1426 | 0.7008 | 0.89 | 0.7841 | 0.2 |
| No log | 0.59 | 250 | 0.0403 | 0.8089 | 0.91 | 0.8565 | 0.5 |
| No log | 0.59 | 250 | 0.0659 | 0.9157 | 0.76 | 0.8306 | 0.7000 |
| No log | 0.59 | 250 | 0.0170 | 0.8423 | 0.935 | 0.8863 | 0.5 |
| No log | 0.59 | 250 | 0.1061 | 0.8053 | 0.765 | 0.7846 | 0.6 |
| No log | 0.59 | 250 | 0.0421 | 0.8646 | 0.83 | 0.8469 | 0.7000 |
| No log | 0.59 | 250 | 0.0640 | 0.7650 | 0.895 | 0.8249 | 0.4 |
| No log | 0.59 | 250 | 0.0498 | 0.7900 | 0.865 | 0.8258 | 0.5 |
| No log | 0.59 | 250 | 0.0939 | 0.7689 | 0.815 | 0.7913 | 0.5 |
| No log | 0.59 | 250 | 0.0372 | 0.8632 | 0.915 | 0.8883 | 0.5 |
| No log | 0.59 | 250 | 0.0759 | 0.5760 | 0.625 | 0.5995 | 0.2 |
| No log | 0.59 | 250 | 0.1436 | 0.6419 | 0.69 | 0.6651 | 0.3000 |
| No log | 0.59 | 250 | 0.0303 | 0.8019 | 0.83 | 0.8157 | 0.3000 |
| No log | 0.59 | 250 | 0.0773 | 0.6996 | 0.92 | 0.7948 | 0.4 |
| No log | 0.59 | 250 | 0.0922 | 0.8462 | 0.825 | 0.8354 | 0.6 |
| No log | 0.59 | 250 | 0.0637 | 0.815 | 0.815 | 0.815 | 0.6 |
| No log | 0.59 | 250 | 0.0293 | 0.8028 | 0.855 | 0.8281 | 0.6 |
| No log | 0.59 | 250 | 0.0186 | 0.8302 | 0.88 | 0.8544 | 0.3000 |
| No log | 0.59 | 250 | 0.1214 | 0.7610 | 0.78 | 0.7704 | 0.6 |
| No log | 0.59 | 250 | 0.0634 | 0.6735 | 0.66 | 0.6667 | 0.4 |
| No log | 0.59 | 250 | 0.0853 | 0.8491 | 0.9 | 0.8738 | 0.3000 |
| No log | 0.59 | 250 | 0.1008 | 0.4034 | 0.71 | 0.5145 | 0.075 |
| No log | 0.59 | 250 | 0.0388 | 0.8586 | 0.8283 | 0.8432 | 0.4 |
| No log | 0.59 | 250 | 0.0895 | 0.7566 | 0.855 | 0.8028 | 0.2 |
| No log | 0.59 | 250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.007 |
| No log | 0.59 | 250 | 0.0171 | 0.6667 | 0.9239 | 0.7745 | 0.5 |
| No log | 0.59 | 250 | 0.0055 | 0.8844 | 0.995 | 0.9365 | 0.2 |
| No log | 0.59 | 250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.042 |
| No log | 0.59 | 250 | 0.0074 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 0.59 | 250 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.08 |
| No log | 0.59 | 250 | 0.0035 | 0.9947 | 1.0 | 0.9973 | 0.4 |
| No log | 0.59 | 250 | 0.0029 | 0.9755 | 0.995 | 0.9851 | 0.3000 |
| No log | 0.59 | 250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.032 |
| No log | 0.59 | 250 | 0.0025 | 0.9900 | 0.995 | 0.9925 | 0.6 |
| No log | 0.59 | 250 | 0.0020 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 0.59 | 250 | 0.0071 | 0.9655 | 0.98 | 0.9727 | 0.024 |
| No log | 0.59 | 250 | 0.0123 | 0.9946 | 0.915 | 0.9531 | 0.5 |
| No log | 0.59 | 250 | 0.0009 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 0.59 | 250 | 0.0166 | 0.9945 | 0.91 | 0.9504 | 0.6 |
| No log | 0.59 | 250 | 0.0016 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 0.59 | 250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.007 |
| No log | 0.59 | 250 | 0.0040 | 0.9949 | 0.985 | 0.9899 | 0.3000 |
| No log | 0.59 | 250 | 0.0014 | 0.995 | 0.995 | 0.995 | 0.6 |
| No log | 0.59 | 250 | 0.0055 | 0.9524 | 1.0 | 0.9756 | 0.5 |
| No log | 0.59 | 250 | 0.0409 | 0.8230 | 0.86 | 0.8411 | 0.5 |
| No log | 0.59 | 250 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 0.59 | 250 | 0.0030 | 0.9899 | 0.98 | 0.9849 | 0.3000 |
| No log | 0.59 | 250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 0.59 | 250 | 0.0015 | 0.9900 | 0.995 | 0.9925 | 0.6 |
| No log | 0.59 | 250 | 0.0017 | 0.995 | 0.995 | 0.995 | 0.3000 |
| No log | 0.59 | 250 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 0.59 | 250 | 0.0048 | 0.9512 | 0.975 | 0.9630 | 0.5 |
| No log | 0.59 | 250 | 0.0008 | 1.0 | 0.995 | 0.9975 | 0.7000 |
| No log | 0.59 | 250 | 0.0132 | 0.9897 | 0.96 | 0.9746 | 0.2 |
| No log | 0.59 | 250 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 0.59 | 250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.011 |
| No log | 0.59 | 250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 0.59 | 250 | 0.0037 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 0.59 | 250 | 0.0020 | 0.9852 | 1.0 | 0.9926 | 0.3000 |
| No log | 0.59 | 250 | 0.0013 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 0.59 | 250 | 0.0039 | 0.9792 | 1.0 | 0.9895 | 0.4 |
| No log | 0.59 | 250 | 0.0045 | 0.9206 | 0.985 | 0.9517 | 0.2 |
| No log | 0.59 | 250 | 0.0011 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 0.59 | 250 | 0.0027 | 0.9756 | 1.0 | 0.9877 | 0.0520 |
| No log | 0.59 | 250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.003 |
| No log | 0.59 | 250 | 0.0032 | 0.9851 | 0.995 | 0.9900 | 0.3000 |
| No log | 0.59 | 250 | 0.0024 | 0.9899 | 0.985 | 0.9875 | 0.8 |
| No log | 0.59 | 250 | 0.0192 | 0.9340 | 0.9293 | 0.9316 | 0.8 |
| No log | 0.59 | 250 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 0.59 | 250 | 0.0046 | 0.9706 | 0.99 | 0.9802 | 0.7000 |
| No log | 0.59 | 250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.006 |
| No log | 0.59 | 250 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.7000 |
| No log | 0.59 | 250 | 0.0142 | 0.9431 | 0.995 | 0.9684 | 0.029 |
| No log | 0.59 | 250 | 0.0014 | 0.9917 | 1.0 | 0.9959 | 0.04 |
| No log | 0.59 | 250 | 0.0150 | 0.9418 | 0.89 | 0.9152 | 0.6 |
| No log | 0.59 | 250 | 0.0078 | 0.9901 | 1.0 | 0.9950 | 0.029 |
| No log | 0.59 | 250 | 0.0021 | 0.9851 | 0.99 | 0.9875 | 0.5 |
| No log | 0.59 | 250 | 0.0022 | 0.9901 | 1.0 | 0.9950 | 0.7000 |
| No log | 0.59 | 250 | 0.0021 | 1.0 | 0.995 | 0.9975 | 0.8 |
| No log | 0.59 | 250 | 0.0097 | 0.8083 | 0.97 | 0.8818 | 0.4 |
| No log | 0.59 | 250 | 0.0048 | 1.0 | 0.99 | 0.9950 | 0.028 |
| No log | 0.59 | 250 | 0.0061 | 0.9898 | 0.975 | 0.9824 | 0.7000 |
| No log | 0.59 | 250 | 0.0177 | 0.7562 | 0.7716 | 0.7638 | 0.7000 |
| No log | 0.59 | 250 | 0.0086 | 0.8899 | 0.97 | 0.9282 | 0.3000 |
| No log | 0.59 | 250 | 0.0414 | 0.9333 | 0.84 | 0.8842 | 0.6 |
| No log | 0.59 | 250 | 0.1791 | 0.4894 | 0.8214 | 0.6133 | 0.0270 |
| No log | 0.59 | 250 | 0.0076 | 0.9307 | 0.94 | 0.9353 | 0.6 |
| No log | 0.59 | 250 | 0.0829 | 0.7860 | 0.8989 | 0.8387 | 0.4 |
| No log | 0.59 | 250 | 0.0309 | 0.8423 | 0.935 | 0.8863 | 0.3000 |
| No log | 0.59 | 250 | 0.0308 | 0.8854 | 0.85 | 0.8673 | 0.5 |
| No log | 0.59 | 250 | 0.0247 | 0.9010 | 0.91 | 0.9055 | 0.4 |
| No log | 0.59 | 250 | 0.0284 | 0.8578 | 0.965 | 0.9082 | 0.4 |
| No log | 0.59 | 250 | 0.0207 | 0.9010 | 0.865 | 0.8827 | 0.4 |
| No log | 0.59 | 250 | 0.0356 | 0.8462 | 0.825 | 0.8354 | 0.5 |
| No log | 0.59 | 250 | 0.0195 | 0.8365 | 0.87 | 0.8529 | 0.5 |
| No log | 0.59 | 250 | 0.0418 | 0.7816 | 0.805 | 0.7931 | 0.4 |
| No log | 0.59 | 250 | 0.0498 | 0.8418 | 0.825 | 0.8333 | 0.4 |
| No log | 0.59 | 250 | 0.0026 | 0.995 | 0.995 | 0.995 | 0.2 |
| No log | 0.59 | 250 | 0.0342 | 0.8075 | 0.86 | 0.8329 | 0.5 |
| No log | 0.59 | 250 | 0.0261 | 0.8259 | 0.83 | 0.8279 | 0.5 |
| No log | 0.59 | 250 | 0.0312 | 0.8158 | 0.7828 | 0.7990 | 0.5 |
| No log | 0.59 | 250 | 0.0708 | 0.6948 | 0.7437 | 0.7184 | 0.5 |
| No log | 0.59 | 250 | 0.0244 | 0.8579 | 0.845 | 0.8514 | 0.4 |
| No log | 0.59 | 250 | 0.0174 | 0.8894 | 0.885 | 0.8872 | 0.4 |
| No log | 0.59 | 250 | 0.0101 | 0.9439 | 0.925 | 0.9343 | 0.5 |
| No log | 0.59 | 250 | 0.0325 | 0.7570 | 0.81 | 0.7826 | 0.6 |
| No log | 0.59 | 250 | 0.0319 | 0.8317 | 0.84 | 0.8358 | 0.4 |
| No log | 0.59 | 250 | 0.0304 | 0.8479 | 0.92 | 0.8825 | 0.4 |
| No log | 0.59 | 250 | 0.0278 | 0.7182 | 0.79 | 0.7524 | 0.4 |
| No log | 0.59 | 250 | 0.0305 | 0.8426 | 0.83 | 0.8363 | 0.5 |
| No log | 0.59 | 250 | 0.0252 | 0.9388 | 0.92 | 0.9293 | 0.2 |
| No log | 0.59 | 250 | 0.0623 | 0.7347 | 0.72 | 0.7273 | 0.4 |
| No log | 0.59 | 250 | 0.0106 | 0.9898 | 0.975 | 0.9824 | 0.4 |
| No log | 0.59 | 250 | 0.0009 | 1.0 | 0.995 | 0.9975 | 0.7000 |
| No log | 0.59 | 250 | 0.0244 | 0.8640 | 0.985 | 0.9206 | 0.09 |
| No log | 0.59 | 250 | 0.0411 | 0.8128 | 0.76 | 0.7855 | 0.5 |
| No log | 0.59 | 250 | 0.0431 | 0.7811 | 0.785 | 0.7830 | 0.5 |
| No log | 0.59 | 250 | 0.1814 | 0.4565 | 0.4468 | 0.4516 | 0.3000 |
| No log | 0.59 | 250 | 0.0356 | 0.6789 | 0.645 | 0.6615 | 0.4 |
| No log | 0.59 | 250 | 0.0162 | 0.9368 | 0.89 | 0.9128 | 0.7000 |
| No log | 0.59 | 250 | 0.0266 | 0.8774 | 0.93 | 0.9029 | 0.4 |
| No log | 0.59 | 250 | 0.0098 | 0.9567 | 0.995 | 0.9755 | 0.3000 |
| No log | 0.59 | 250 | 0.0315 | 0.8326 | 0.895 | 0.8627 | 0.2 |
| No log | 0.59 | 250 | 0.0347 | 0.7031 | 0.675 | 0.6888 | 0.5 |
| No log | 0.59 | 250 | 0.0702 | 0.6837 | 0.7538 | 0.7171 | 0.5 |
| No log | 0.59 | 250 | 0.0192 | 0.9057 | 0.96 | 0.9320 | 0.3000 |
| No log | 0.59 | 250 | 0.0222 | 0.8564 | 0.865 | 0.8607 | 0.6 |
| No log | 0.59 | 250 | 0.0078 | 0.9833 | 0.9833 | 0.9833 | 0.3000 |
| No log | 0.59 | 250 | 0.0132 | 0.9154 | 0.92 | 0.9177 | 0.6 |
| No log | 0.59 | 250 | 0.0306 | 0.8645 | 0.925 | 0.8937 | 0.3000 |
| No log | 0.59 | 250 | 0.0120 | 0.8829 | 0.8167 | 0.8485 | 0.3000 |
| No log | 0.59 | 250 | 0.0157 | 0.8832 | 0.945 | 0.9130 | 0.4 |
| No log | 0.59 | 250 | 0.0752 | 0.7355 | 0.89 | 0.8054 | 0.083 |
| No log | 0.59 | 250 | 0.0363 | 0.7876 | 0.7755 | 0.7815 | 0.5 |
| No log | 0.59 | 250 | 0.0039 | 0.9803 | 0.995 | 0.9876 | 0.4 |
| No log | 0.59 | 250 | 0.0714 | 0.7273 | 0.8 | 0.7619 | 0.4 |
| No log | 0.59 | 250 | 0.0349 | 0.5903 | 0.425 | 0.4942 | 0.4 |
| No log | 0.59 | 250 | 0.0230 | 0.9213 | 0.82 | 0.8677 | 0.4 |
| No log | 0.59 | 250 | 0.1112 | 0.6693 | 0.84 | 0.7450 | 0.2 |
| No log | 0.59 | 250 | 0.0728 | 0.5699 | 0.795 | 0.6639 | 0.3000 |
| No log | 0.59 | 250 | 0.0585 | 0.6872 | 0.78 | 0.7307 | 0.2 |
| No log | 0.59 | 250 | 0.1074 | 0.6908 | 0.905 | 0.7835 | 0.0530 |
| No log | 0.59 | 250 | 0.0464 | 0.7489 | 0.865 | 0.8028 | 0.4 |
| No log | 0.59 | 250 | 0.0418 | 0.8009 | 0.845 | 0.8224 | 0.4 |
| No log | 0.59 | 250 | 0.0522 | 0.5385 | 0.4221 | 0.4732 | 0.4 |
| No log | 0.59 | 250 | 0.0541 | 0.7642 | 0.81 | 0.7864 | 0.4 |
| No log | 0.59 | 250 | 0.0529 | 0.7451 | 0.6909 | 0.7170 | 0.6 |
| No log | 0.59 | 250 | 0.0394 | 0.8629 | 0.85 | 0.8564 | 0.3000 |
| No log | 0.59 | 250 | 0.0394 | 0.8629 | 0.85 | 0.8564 | 0.3000 |
| No log | 0.59 | 250 | 0.0359 | 0.8066 | 0.855 | 0.8301 | 0.4 |
| No log | 0.59 | 250 | 0.0512 | 0.7605 | 0.905 | 0.8265 | 0.2 |
| No log | 0.59 | 250 | 0.0331 | 0.8028 | 0.855 | 0.8281 | 0.3000 |
| No log | 0.59 | 250 | 0.0399 | 0.8214 | 0.805 | 0.8131 | 0.5 |
| No log | 0.59 | 250 | 0.0820 | 0.6948 | 0.74 | 0.7167 | 0.3000 |
| No log | 0.59 | 250 | 0.0471 | 0.7465 | 0.81 | 0.7770 | 0.4 |
| No log | 0.59 | 250 | 0.0470 | 0.8065 | 0.875 | 0.8393 | 0.3000 |
| No log | 0.59 | 250 | 0.1420 | 0.6685 | 0.615 | 0.6406 | 0.3000 |
| No log | 0.59 | 250 | 0.0480 | 0.8488 | 0.73 | 0.7849 | 0.6 |
| No log | 0.59 | 250 | 0.0981 | 0.6911 | 0.8543 | 0.7640 | 0.096 |
| No log | 0.59 | 250 | 0.0343 | 0.8 | 0.9 | 0.8471 | 0.3000 |
| No log | 0.59 | 250 | 0.0343 | 0.8 | 0.9 | 0.8471 | 0.3000 |
| No log | 0.59 | 250 | 0.0294 | 0.7381 | 0.6739 | 0.7045 | 0.6 |
| No log | 0.59 | 250 | 0.0294 | 0.7381 | 0.6739 | 0.7045 | 0.6 |
| No log | 0.59 | 250 | 0.0368 | 0.7287 | 0.9 | 0.8054 | 0.2 |
| No log | 0.59 | 250 | 0.0432 | 0.5343 | 0.545 | 0.5396 | 0.5 |
| No log | 0.59 | 250 | 0.0513 | 0.6364 | 0.5385 | 0.5833 | 0.6 |
| No log | 0.59 | 250 | 0.0350 | 0.7897 | 0.77 | 0.7797 | 0.6 |
| No log | 0.59 | 250 | 0.0389 | 0.6154 | 0.64 | 0.6275 | 0.5 |
| No log | 0.59 | 250 | 0.0534 | 0.6332 | 0.915 | 0.7485 | 0.096 |
| No log | 0.59 | 250 | 0.0397 | 0.7959 | 0.78 | 0.7879 | 0.6 |
| No log | 0.59 | 250 | 0.0558 | 0.7591 | 0.835 | 0.7952 | 0.4 |
| No log | 0.59 | 250 | 0.0953 | 0.3636 | 0.4615 | 0.4068 | 0.3000 |
| No log | 0.59 | 250 | 0.0784 | 0.6830 | 0.905 | 0.7785 | 0.2 |
| No log | 0.59 | 250 | 0.0542 | 0.7265 | 0.85 | 0.7834 | 0.4 |
| No log | 0.59 | 250 | 0.0685 | 0.9384 | 0.685 | 0.7919 | 0.9 |
| No log | 0.59 | 250 | 0.0746 | 0.7352 | 0.805 | 0.7685 | 0.7000 |
| No log | 0.59 | 250 | 0.0668 | 0.6236 | 0.845 | 0.7176 | 0.3000 |
| No log | 0.59 | 250 | 0.1244 | 0.8113 | 0.86 | 0.8350 | 0.2 |
| No log | 0.59 | 250 | 0.0662 | 0.6348 | 0.73 | 0.6791 | 0.0870 |
| No log | 0.59 | 250 | 0.0674 | 0.4156 | 0.665 | 0.5115 | 0.2 |
| No log | 0.59 | 250 | 0.0452 | 0.8025 | 0.955 | 0.8721 | 0.9 |
| No log | 0.59 | 250 | 0.0365 | 0.4513 | 0.765 | 0.5677 | 0.094 |
| No log | 0.59 | 250 | 0.0545 | 0.7838 | 0.87 | 0.8246 | 0.3000 |
| No log | 0.59 | 250 | 0.0701 | 0.6875 | 0.9167 | 0.7857 | 0.3000 |
| No log | 0.59 | 250 | 0.0461 | 0.7542 | 0.89 | 0.8165 | 0.3000 |
| No log | 0.59 | 250 | 0.0403 | 0.8317 | 0.865 | 0.8480 | 0.4 |
| No log | 0.59 | 250 | 0.0574 | 0.6506 | 0.81 | 0.7216 | 0.3000 |
| No log | 0.59 | 250 | 0.0474 | 0.7258 | 0.9 | 0.8036 | 0.3000 |
| No log | 0.59 | 250 | 0.0469 | 0.5407 | 0.665 | 0.5964 | 0.4 |
| No log | 0.59 | 250 | 0.0278 | 0.8732 | 0.93 | 0.9007 | 0.2 |
| No log | 0.59 | 250 | 0.0951 | 0.3683 | 0.58 | 0.4505 | 0.3000 |
| No log | 0.59 | 250 | 0.0494 | 0.7284 | 0.8894 | 0.8009 | 0.3000 |
| No log | 0.59 | 250 | 0.0923 | 0.4820 | 0.6505 | 0.5537 | 0.2 |
| No log | 0.59 | 250 | 0.0403 | 0.6170 | 0.87 | 0.7220 | 0.098 |
| No log | 0.59 | 250 | 0.0362 | 0.8762 | 0.885 | 0.8806 | 0.5 |
| No log | 0.59 | 250 | 0.0599 | 0.8436 | 0.89 | 0.8662 | 0.2 |
| No log | 0.59 | 250 | 0.0441 | 0.6895 | 0.655 | 0.6718 | 0.4 |
| No log | 0.59 | 250 | 0.0587 | 0.8052 | 0.9394 | 0.8671 | 0.3000 |
| No log | 0.59 | 250 | 0.0451 | 0.6810 | 0.7940 | 0.7332 | 0.4 |
| No log | 0.59 | 250 | 0.0545 | 0.6481 | 0.93 | 0.7639 | 0.2 |
| No log | 0.59 | 250 | 0.0452 | 0.7692 | 0.85 | 0.8076 | 0.2 |
| No log | 0.59 | 250 | 0.0403 | 0.8112 | 0.795 | 0.8030 | 0.5 |
| No log | 0.59 | 250 | 0.0507 | 0.7402 | 0.755 | 0.7475 | 0.7000 |
| No log | 0.59 | 250 | 0.0502 | 0.7288 | 0.86 | 0.7890 | 0.3000 |
| No log | 0.59 | 250 | 0.0390 | 0.8558 | 0.89 | 0.8725 | 0.4 |
| No log | 0.59 | 250 | 0.0446 | 0.7395 | 0.795 | 0.7663 | 0.4 |
| No log | 0.59 | 250 | 0.0323 | 0.8528 | 0.84 | 0.8463 | 0.4 |
| No log | 0.59 | 250 | 0.0651 | 0.7269 | 0.865 | 0.7900 | 0.2 |
| No log | 0.59 | 250 | 0.0457 | 0.4610 | 0.62 | 0.5288 | 0.2 |
| No log | 0.59 | 250 | 0.0547 | 0.5138 | 0.745 | 0.6082 | 0.4 |
| No log | 0.59 | 250 | 0.0424 | 0.8444 | 0.76 | 0.8 | 0.4 |
| No log | 0.59 | 250 | 0.0590 | 0.5836 | 0.82 | 0.6819 | 0.5 |
| No log | 0.59 | 250 | 0.0582 | 0.7085 | 0.875 | 0.7830 | 0.3000 |
| No log | 0.59 | 250 | 0.0376 | 0.7915 | 0.835 | 0.8127 | 0.4 |
| No log | 0.59 | 250 | 0.0950 | 0.5033 | 0.755 | 0.604 | 0.3000 |
| No log | 0.59 | 250 | 0.0679 | 0.8182 | 0.765 | 0.7907 | 0.5 |
| No log | 0.59 | 250 | 0.0497 | 0.6545 | 0.805 | 0.7220 | 0.5 |
| No log | 0.59 | 250 | 0.0850 | 0.5812 | 0.6869 | 0.6296 | 0.2 |
| No log | 0.59 | 250 | 0.0531 | 0.7629 | 0.7475 | 0.7551 | 0.5 |
| No log | 0.59 | 250 | 0.0163 | 0.9559 | 0.975 | 0.9653 | 0.5 |
| No log | 0.59 | 250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.6 |
| No log | 0.59 | 250 | 0.0033 | 0.995 | 0.995 | 0.995 | 0.7000 |
| No log | 0.59 | 250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 0.59 | 250 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 0.59 | 250 | 0.0007 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 0.59 | 250 | 0.0012 | 1.0 | 0.995 | 0.9975 | 0.9 |
| No log | 0.59 | 250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.6 |
| No log | 0.59 | 250 | 0.0026 | 0.9851 | 0.995 | 0.9900 | 0.3000 |
| No log | 0.59 | 250 | 0.0012 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 0.59 | 250 | 0.0175 | 0.9347 | 0.93 | 0.9323 | 0.2 |
| No log | 0.59 | 250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 0.59 | 250 | 0.0274 | 0.9282 | 0.84 | 0.8819 | 0.2 |
| No log | 0.59 | 250 | 0.0018 | 0.9901 | 1.0 | 0.9950 | 0.6 |
| No log | 0.59 | 250 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 0.59 | 250 | 0.0032 | 0.995 | 0.995 | 0.995 | 0.8 |
| No log | 0.59 | 250 | 0.0058 | 0.9751 | 0.98 | 0.9776 | 0.8 |
| No log | 0.59 | 250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 0.59 | 250 | 0.0007 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 0.59 | 250 | 0.0034 | 0.9900 | 0.995 | 0.9925 | 0.3000 |
| No log | 0.59 | 250 | 0.0023 | 0.995 | 0.995 | 0.995 | 0.8 |
| No log | 0.59 | 250 | 0.0076 | 0.9848 | 0.97 | 0.9773 | 0.4 |
| No log | 0.59 | 250 | 0.1356 | 0.4509 | 0.505 | 0.4764 | 0.8 |
| No log | 0.59 | 250 | 0.1014 | 0.2633 | 0.5448 | 0.3551 | 0.3000 |
| No log | 0.59 | 250 | 0.1233 | 0.6832 | 0.69 | 0.6866 | 0.4 |
| No log | 0.59 | 250 | 0.1224 | 0.6552 | 0.665 | 0.6600 | 0.6 |
| No log | 1.17 | 500 | 0.0398 | 0.9624 | 0.895 | 0.9275 | 0.6 |
| No log | 1.17 | 500 | 0.0135 | 0.9072 | 0.88 | 0.8934 | 0.5 |
| No log | 1.17 | 500 | 0.0329 | 0.8738 | 0.9 | 0.8867 | 0.4 |
| No log | 1.17 | 500 | 0.0180 | 0.8682 | 0.955 | 0.9095 | 0.2 |
| No log | 1.17 | 500 | 0.0361 | 0.9482 | 0.915 | 0.9313 | 0.5 |
| No log | 1.17 | 500 | 0.0096 | 0.9802 | 0.99 | 0.9851 | 0.7000 |
| No log | 1.17 | 500 | 0.0157 | 0.9139 | 0.9598 | 0.9363 | 0.4 |
| No log | 1.17 | 500 | 0.0098 | 0.9660 | 0.995 | 0.9803 | 0.4 |
| No log | 1.17 | 500 | 0.0134 | 0.9390 | 1.0 | 0.9685 | 0.3000 |
| No log | 1.17 | 500 | 0.0326 | 0.9831 | 0.875 | 0.9259 | 0.8 |
| No log | 1.17 | 500 | 0.0092 | 0.9567 | 0.995 | 0.9755 | 0.5 |
| No log | 1.17 | 500 | 0.0139 | 0.9420 | 0.975 | 0.9582 | 0.6 |
| No log | 1.17 | 500 | 0.0072 | 0.9615 | 1.0 | 0.9804 | 0.4 |
| No log | 1.17 | 500 | 0.0197 | 0.9474 | 0.99 | 0.9682 | 0.7000 |
| No log | 1.17 | 500 | 0.0130 | 0.9519 | 0.99 | 0.9706 | 0.5 |
| No log | 1.17 | 500 | 0.0107 | 0.9426 | 0.985 | 0.9633 | 0.5 |
| No log | 1.17 | 500 | 0.0108 | 0.9512 | 0.9898 | 0.9701 | 0.6 |
| No log | 1.17 | 500 | 0.0130 | 0.9554 | 0.965 | 0.9602 | 0.7000 |
| No log | 1.17 | 500 | 0.0485 | 0.8724 | 0.8593 | 0.8658 | 0.6 |
| No log | 1.17 | 500 | 0.0152 | 0.9259 | 1.0 | 0.9615 | 0.083 |
| No log | 1.17 | 500 | 0.0115 | 0.9292 | 0.985 | 0.9563 | 0.2 |
| No log | 1.17 | 500 | 0.0400 | 0.9789 | 0.93 | 0.9538 | 0.5 |
| No log | 1.17 | 500 | 0.0051 | 0.9895 | 0.945 | 0.9668 | 0.8 |
| No log | 1.17 | 500 | 0.0064 | 0.9749 | 0.9848 | 0.9798 | 0.4 |
| No log | 1.17 | 500 | 0.0198 | 0.9563 | 0.985 | 0.9704 | 0.4 |
| No log | 1.17 | 500 | 0.0579 | 0.8929 | 0.875 | 0.8838 | 0.5 |
| No log | 1.17 | 500 | 0.0066 | 0.9314 | 0.9548 | 0.9429 | 0.5 |
| No log | 1.17 | 500 | 0.0232 | 0.8991 | 0.98 | 0.9378 | 0.3000 |
| No log | 1.17 | 500 | 0.0185 | 0.9 | 0.945 | 0.9220 | 0.5 |
| No log | 1.17 | 500 | 0.0150 | 0.9431 | 0.995 | 0.9684 | 0.08 |
| No log | 1.17 | 500 | 0.0322 | 0.9409 | 0.955 | 0.9479 | 0.4 |
| No log | 1.17 | 500 | 0.0253 | 0.9296 | 0.99 | 0.9588 | 0.5 |
| No log | 1.17 | 500 | 0.0130 | 0.9548 | 0.95 | 0.9524 | 0.4 |
| No log | 1.17 | 500 | 0.0121 | 0.9662 | 1.0 | 0.9828 | 0.3000 |
| No log | 1.17 | 500 | 0.0103 | 0.97 | 0.97 | 0.97 | 0.7000 |
| No log | 1.17 | 500 | 0.0836 | 0.8579 | 0.845 | 0.8514 | 0.3000 |
| No log | 1.17 | 500 | 0.0109 | 0.9378 | 0.98 | 0.9584 | 0.2 |
| No log | 1.17 | 500 | 0.0174 | 0.9752 | 0.985 | 0.9801 | 0.2 |
| No log | 1.17 | 500 | 0.0834 | 0.7175 | 0.8040 | 0.7583 | 0.0730 |
| No log | 1.17 | 500 | 0.0417 | 0.9534 | 0.9246 | 0.9388 | 0.3000 |
| No log | 1.17 | 500 | 0.0507 | 0.8447 | 0.925 | 0.8831 | 0.5 |
| No log | 1.17 | 500 | 0.0123 | 0.9502 | 0.955 | 0.9526 | 0.2 |
| No log | 1.17 | 500 | 0.0090 | 0.985 | 0.985 | 0.985 | 0.6 |
| No log | 1.17 | 500 | 0.0050 | 0.9747 | 0.9698 | 0.9723 | 0.5 |
| No log | 1.17 | 500 | 0.0072 | 0.9423 | 0.98 | 0.9608 | 0.4 |
| No log | 1.17 | 500 | 0.0048 | 0.965 | 0.965 | 0.965 | 0.8 |
| No log | 1.17 | 500 | 0.0096 | 0.9569 | 1.0 | 0.9780 | 0.5 |
| No log | 1.17 | 500 | 0.0338 | 0.8935 | 0.965 | 0.9279 | 0.5 |
| No log | 1.17 | 500 | 0.0145 | 0.9803 | 0.995 | 0.9876 | 0.3000 |
| No log | 1.17 | 500 | 0.0205 | 0.9701 | 0.975 | 0.9726 | 0.7000 |
| No log | 1.17 | 500 | 0.0154 | 0.98 | 0.98 | 0.98 | 0.8 |
| No log | 1.17 | 500 | 0.0060 | 0.9023 | 0.9848 | 0.9417 | 0.6 |
| No log | 1.17 | 500 | 0.0739 | 0.7833 | 0.795 | 0.7891 | 0.2 |
| No log | 1.17 | 500 | 0.0646 | 0.8565 | 0.895 | 0.8753 | 0.3000 |
| No log | 1.17 | 500 | 0.0105 | 0.9614 | 0.995 | 0.9779 | 0.6 |
| No log | 1.17 | 500 | 0.0139 | 0.9569 | 1.0 | 0.9780 | 0.2 |
| No log | 1.17 | 500 | 0.1061 | 0.8865 | 0.82 | 0.8519 | 0.4 |
| No log | 1.17 | 500 | 0.0073 | 0.9802 | 0.99 | 0.9851 | 0.6 |
| No log | 1.17 | 500 | 0.1253 | 0.8956 | 0.815 | 0.8534 | 0.6 |
| No log | 1.17 | 500 | 0.0115 | 0.9434 | 1.0 | 0.9709 | 0.5 |
| No log | 1.17 | 500 | 0.0091 | 0.9754 | 0.99 | 0.9826 | 0.5 |
| No log | 1.17 | 500 | 0.0067 | 0.96 | 0.96 | 0.96 | 0.9 |
| No log | 1.17 | 500 | 0.0509 | 0.9020 | 0.92 | 0.9109 | 0.4 |
| No log | 1.17 | 500 | 0.0068 | 0.9707 | 0.995 | 0.9827 | 0.5 |
| No log | 1.17 | 500 | 0.0121 | 0.9524 | 1.0 | 0.9756 | 0.8 |
| No log | 1.17 | 500 | 0.0091 | 0.9565 | 0.99 | 0.9730 | 0.8 |
| No log | 1.17 | 500 | 0.0151 | 0.9567 | 0.995 | 0.9755 | 0.2 |
| No log | 1.17 | 500 | 0.0080 | 0.9615 | 1.0 | 0.9804 | 0.3000 |
| No log | 1.17 | 500 | 0.0335 | 0.9480 | 0.82 | 0.8794 | 0.6 |
| No log | 1.17 | 500 | 0.0603 | 0.8673 | 0.915 | 0.8905 | 0.2 |
| No log | 1.17 | 500 | 0.0089 | 0.9282 | 0.97 | 0.9487 | 0.4 |
| No log | 1.17 | 500 | 0.0636 | 0.8374 | 0.85 | 0.8437 | 0.6 |
| No log | 1.17 | 500 | 0.0117 | 0.9479 | 1.0 | 0.9732 | 0.2 |
| No log | 1.17 | 500 | 0.0157 | 0.9387 | 0.995 | 0.9660 | 0.4 |
| No log | 1.17 | 500 | 0.0175 | 0.8911 | 0.9 | 0.8955 | 0.6 |
| No log | 1.17 | 500 | 0.0080 | 0.9447 | 0.94 | 0.9424 | 0.5 |
| No log | 1.17 | 500 | 0.0185 | 0.9429 | 0.99 | 0.9659 | 0.3000 |
| No log | 1.17 | 500 | 0.0149 | 0.9585 | 0.925 | 0.9415 | 0.6 |
| No log | 1.17 | 500 | 0.0488 | 0.9381 | 0.91 | 0.9239 | 0.3000 |
| No log | 1.17 | 500 | 0.0230 | 0.8493 | 0.93 | 0.8878 | 0.4 |
| No log | 1.17 | 500 | 0.0563 | 0.7934 | 0.8535 | 0.8224 | 0.0360 |
| No log | 1.17 | 500 | 0.0269 | 0.9554 | 0.965 | 0.9602 | 0.3000 |
| No log | 1.17 | 500 | 0.1266 | 0.8245 | 0.775 | 0.7990 | 0.5 |
| No log | 1.17 | 500 | 0.0216 | 0.8912 | 0.86 | 0.8753 | 0.6 |
| No log | 1.17 | 500 | 0.0612 | 0.8235 | 0.77 | 0.7959 | 0.5 |
| No log | 1.17 | 500 | 0.0219 | 0.8326 | 0.945 | 0.8852 | 0.3000 |
| No log | 1.17 | 500 | 0.0877 | 0.8134 | 0.85 | 0.8313 | 0.4 |
| No log | 1.17 | 500 | 0.0911 | 0.7339 | 0.91 | 0.8125 | 0.3000 |
| No log | 1.17 | 500 | 0.0649 | 0.6034 | 0.5427 | 0.5714 | 0.4 |
| No log | 1.17 | 500 | 0.0510 | 0.7863 | 0.92 | 0.8479 | 0.3000 |
| No log | 1.17 | 500 | 0.0517 | 0.8646 | 0.83 | 0.8469 | 0.6 |
| No log | 1.17 | 500 | 0.1045 | 0.75 | 0.825 | 0.7857 | 0.4 |
| No log | 1.17 | 500 | 0.0501 | 0.8153 | 0.905 | 0.8578 | 0.4 |
| No log | 1.17 | 500 | 0.0281 | 0.8676 | 0.885 | 0.8762 | 0.6 |
| No log | 1.17 | 500 | 0.0687 | 0.7626 | 0.835 | 0.7971 | 0.4 |
| No log | 1.17 | 500 | 0.0618 | 0.9158 | 0.87 | 0.8923 | 0.7000 |
| No log | 1.17 | 500 | 0.0542 | 0.8966 | 0.91 | 0.9032 | 0.6 |
| No log | 1.17 | 500 | 0.0492 | 0.8160 | 0.865 | 0.8398 | 0.5 |
| No log | 1.17 | 500 | 0.0379 | 0.8199 | 0.8693 | 0.8439 | 0.5 |
| No log | 1.17 | 500 | 0.0611 | 0.8033 | 0.735 | 0.7676 | 0.6 |
| No log | 1.17 | 500 | 0.0738 | 0.8521 | 0.72 | 0.7805 | 0.7000 |
| No log | 1.17 | 500 | 0.0587 | 0.8278 | 0.865 | 0.8460 | 0.4 |
| No log | 1.17 | 500 | 0.0404 | 0.7851 | 0.895 | 0.8364 | 0.3000 |
| No log | 1.17 | 500 | 0.1348 | 0.8066 | 0.855 | 0.8301 | 0.2 |
| No log | 1.17 | 500 | 0.0234 | 0.8833 | 0.795 | 0.8368 | 0.7000 |
| No log | 1.17 | 500 | 0.0426 | 0.7860 | 0.9137 | 0.8451 | 0.2 |
| No log | 1.17 | 500 | 0.0693 | 0.8198 | 0.91 | 0.8626 | 0.4 |
| No log | 1.17 | 500 | 0.0884 | 0.8012 | 0.6884 | 0.7405 | 0.5 |
| No log | 1.17 | 500 | 0.0239 | 0.7861 | 0.79 | 0.7880 | 0.4 |
| No log | 1.17 | 500 | 0.0901 | 0.7929 | 0.785 | 0.7889 | 0.6 |
| No log | 1.17 | 500 | 0.0367 | 0.8342 | 0.78 | 0.8062 | 0.5 |
| No log | 1.17 | 500 | 0.0672 | 0.8715 | 0.78 | 0.8232 | 0.6 |
| No log | 1.17 | 500 | 0.0703 | 0.8389 | 0.885 | 0.8613 | 0.5 |
| No log | 1.17 | 500 | 0.1127 | 0.7628 | 0.82 | 0.7904 | 0.5 |
| No log | 1.17 | 500 | 0.0777 | 0.7358 | 0.78 | 0.7573 | 0.3000 |
| No log | 1.17 | 500 | 0.0656 | 0.75 | 0.855 | 0.7991 | 0.4 |
| No log | 1.17 | 500 | 0.0498 | 0.8255 | 0.8794 | 0.8516 | 0.5 |
| No log | 1.17 | 500 | 0.1483 | 0.7183 | 0.765 | 0.7409 | 0.2 |
| No log | 1.17 | 500 | 0.0800 | 0.4370 | 0.555 | 0.4890 | 0.3000 |
| No log | 1.17 | 500 | 0.1018 | 0.9106 | 0.815 | 0.8602 | 0.6 |
| No log | 1.17 | 500 | 0.1469 | 0.4916 | 0.7387 | 0.5904 | 0.025 |
| No log | 1.17 | 500 | 0.0849 | 0.9053 | 0.7688 | 0.8315 | 0.5 |
| No log | 1.17 | 500 | 0.0896 | 0.6703 | 0.935 | 0.7808 | 0.2 |
| No log | 1.17 | 500 | 0.0276 | 0.9341 | 0.8586 | 0.8947 | 0.3000 |
| No log | 1.17 | 500 | 0.0701 | 0.8618 | 0.935 | 0.8969 | 0.4 |
| No log | 1.17 | 500 | 0.0266 | 0.8660 | 0.8442 | 0.8550 | 0.5 |
| No log | 1.17 | 500 | 0.0359 | 0.7752 | 0.845 | 0.8086 | 0.5 |
| No log | 1.17 | 500 | 0.0428 | 0.7636 | 0.63 | 0.6904 | 0.6 |
| No log | 1.17 | 500 | 0.0652 | 0.8366 | 0.845 | 0.8408 | 0.6 |
| No log | 1.17 | 500 | 0.0638 | 0.6840 | 0.79 | 0.7332 | 0.4 |
| No log | 1.17 | 500 | 0.0560 | 0.9175 | 0.945 | 0.9310 | 0.4 |
| No log | 1.17 | 500 | 0.0708 | 0.8010 | 0.785 | 0.7929 | 0.5 |
| No log | 1.17 | 500 | 0.1218 | 0.7051 | 0.825 | 0.7604 | 0.4 |
| No log | 1.17 | 500 | 0.0212 | 0.8246 | 0.7085 | 0.7622 | 0.7000 |
| No log | 1.17 | 500 | 0.1050 | 0.7208 | 0.7136 | 0.7172 | 0.2 |
| No log | 1.17 | 500 | 0.0946 | 0.8653 | 0.835 | 0.8499 | 0.5 |
| No log | 1.17 | 500 | 0.0515 | 0.8365 | 0.87 | 0.8529 | 0.6 |
| No log | 1.17 | 500 | 0.0578 | 0.8514 | 0.945 | 0.8957 | 0.4 |
| No log | 1.17 | 500 | 0.1081 | 0.84 | 0.735 | 0.7840 | 0.4 |
| No log | 1.17 | 500 | 0.0563 | 0.8594 | 0.825 | 0.8418 | 0.6 |
| No log | 1.17 | 500 | 0.1341 | 0.8220 | 0.785 | 0.8031 | 0.5 |
| No log | 1.17 | 500 | 0.0407 | 0.8317 | 0.865 | 0.8480 | 0.6 |
| No log | 1.17 | 500 | 0.0569 | 0.9061 | 0.82 | 0.8609 | 0.6 |
| No log | 1.17 | 500 | 0.0167 | 0.8844 | 0.88 | 0.8822 | 0.7000 |
| No log | 1.17 | 500 | 0.1030 | 0.704 | 0.88 | 0.7822 | 0.3000 |
| No log | 1.17 | 500 | 0.0379 | 0.8796 | 0.84 | 0.8593 | 0.7000 |
| No log | 1.17 | 500 | 0.0616 | 0.8125 | 0.845 | 0.8284 | 0.5 |
| No log | 1.17 | 500 | 0.0426 | 0.8293 | 0.85 | 0.8395 | 0.5 |
| No log | 1.17 | 500 | 0.0920 | 0.8387 | 0.78 | 0.8083 | 0.6 |
| No log | 1.17 | 500 | 0.0370 | 0.9162 | 0.875 | 0.8951 | 0.7000 |
| No log | 1.17 | 500 | 0.0719 | 0.6995 | 0.64 | 0.6684 | 0.3000 |
| No log | 1.17 | 500 | 0.1296 | 0.7042 | 0.75 | 0.7264 | 0.3000 |
| No log | 1.17 | 500 | 0.0285 | 0.8439 | 0.865 | 0.8543 | 0.4 |
| No log | 1.17 | 500 | 0.0734 | 0.7358 | 0.905 | 0.8117 | 0.4 |
| No log | 1.17 | 500 | 0.0920 | 0.8259 | 0.83 | 0.8279 | 0.5 |
| No log | 1.17 | 500 | 0.0570 | 0.8066 | 0.855 | 0.8301 | 0.6 |
| No log | 1.17 | 500 | 0.0259 | 0.8447 | 0.87 | 0.8571 | 0.6 |
| No log | 1.17 | 500 | 0.0163 | 0.8356 | 0.915 | 0.8735 | 0.3000 |
| No log | 1.17 | 500 | 0.1137 | 0.7364 | 0.81 | 0.7714 | 0.5 |
| No log | 1.17 | 500 | 0.0606 | 0.6230 | 0.76 | 0.6847 | 0.3000 |
| No log | 1.17 | 500 | 0.0823 | 0.8619 | 0.905 | 0.8829 | 0.2 |
| No log | 1.17 | 500 | 0.1016 | 0.45 | 0.675 | 0.54 | 0.0870 |
| No log | 1.17 | 500 | 0.0441 | 0.8385 | 0.8131 | 0.8256 | 0.3000 |
| No log | 1.17 | 500 | 0.0696 | 0.8473 | 0.86 | 0.8536 | 0.3000 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.003 |
| No log | 1.17 | 500 | 0.0121 | 0.7224 | 0.9645 | 0.8261 | 0.6 |
| No log | 1.17 | 500 | 0.0043 | 0.9643 | 0.945 | 0.9545 | 0.6 |
| No log | 1.17 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.0520 |
| No log | 1.17 | 500 | 0.0018 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.075 |
| No log | 1.17 | 500 | 0.0048 | 0.9947 | 1.0 | 0.9973 | 0.039 |
| No log | 1.17 | 500 | 0.0017 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 1.17 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.17 | 500 | 0.0036 | 0.9851 | 0.995 | 0.9900 | 0.8 |
| No log | 1.17 | 500 | 0.0014 | 0.9950 | 1.0 | 0.9975 | 0.4 |
| No log | 1.17 | 500 | 0.0080 | 0.9751 | 0.98 | 0.9776 | 0.0090 |
| No log | 1.17 | 500 | 0.0132 | 0.9947 | 0.93 | 0.9612 | 0.2 |
| No log | 1.17 | 500 | 0.0010 | 1.0 | 0.995 | 0.9975 | 0.9 |
| No log | 1.17 | 500 | 0.0184 | 0.9946 | 0.915 | 0.9531 | 0.8 |
| No log | 1.17 | 500 | 0.0018 | 0.9901 | 1.0 | 0.9950 | 0.0880 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.008 |
| No log | 1.17 | 500 | 0.0033 | 0.99 | 0.99 | 0.99 | 0.3000 |
| No log | 1.17 | 500 | 0.0011 | 1.0 | 0.995 | 0.9975 | 0.6 |
| No log | 1.17 | 500 | 0.0045 | 0.9612 | 0.99 | 0.9754 | 0.5 |
| No log | 1.17 | 500 | 0.0386 | 0.8925 | 0.83 | 0.8601 | 0.6 |
| No log | 1.17 | 500 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.089 |
| No log | 1.17 | 500 | 0.0022 | 1.0 | 0.98 | 0.9899 | 0.4 |
| No log | 1.17 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.5 |
| No log | 1.17 | 500 | 0.0016 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 1.17 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 1.17 | 500 | 0.0048 | 0.9333 | 0.98 | 0.9561 | 0.3000 |
| No log | 1.17 | 500 | 0.0018 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 1.17 | 500 | 0.0163 | 0.9846 | 0.96 | 0.9722 | 0.034 |
| No log | 1.17 | 500 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.025 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0022 | 0.9804 | 1.0 | 0.9901 | 0.2 |
| No log | 1.17 | 500 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 1.17 | 500 | 0.0040 | 0.9792 | 1.0 | 0.9895 | 0.067 |
| No log | 1.17 | 500 | 0.0043 | 0.9840 | 0.92 | 0.9509 | 0.8 |
| No log | 1.17 | 500 | 0.0011 | 0.9950 | 1.0 | 0.9975 | 0.5 |
| No log | 1.17 | 500 | 0.0021 | 0.9852 | 1.0 | 0.9926 | 0.4 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.17 | 500 | 0.0018 | 0.9950 | 0.99 | 0.9925 | 0.7000 |
| No log | 1.17 | 500 | 0.0017 | 0.9949 | 0.985 | 0.9899 | 0.5 |
| No log | 1.17 | 500 | 0.0200 | 0.9095 | 0.9646 | 0.9363 | 0.6 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 1.17 | 500 | 0.0029 | 0.9949 | 0.985 | 0.9899 | 0.9 |
| No log | 1.17 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 1.17 | 500 | 0.0004 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 1.17 | 500 | 0.0126 | 0.9522 | 0.995 | 0.9731 | 0.048 |
| No log | 1.17 | 500 | 0.0006 | 0.9917 | 1.0 | 0.9959 | 0.0090 |
| No log | 1.17 | 500 | 0.0146 | 0.9179 | 0.895 | 0.9063 | 0.5 |
| No log | 1.17 | 500 | 0.0035 | 0.9950 | 1.0 | 0.9975 | 0.035 |
| No log | 1.17 | 500 | 0.0013 | 0.9852 | 1.0 | 0.9926 | 0.6 |
| No log | 1.17 | 500 | 0.0016 | 0.9901 | 1.0 | 0.9950 | 0.3000 |
| No log | 1.17 | 500 | 0.0017 | 0.995 | 0.995 | 0.995 | 0.7000 |
| No log | 1.17 | 500 | 0.0108 | 0.7773 | 0.995 | 0.8728 | 0.0370 |
| No log | 1.17 | 500 | 0.0058 | 1.0 | 0.99 | 0.9950 | 0.012 |
| No log | 1.17 | 500 | 0.0059 | 0.9948 | 0.965 | 0.9797 | 0.8 |
| No log | 1.17 | 500 | 0.0150 | 0.7078 | 0.8731 | 0.7818 | 0.5 |
| No log | 1.17 | 500 | 0.0070 | 0.9175 | 0.945 | 0.9310 | 0.4 |
| No log | 1.17 | 500 | 0.0445 | 0.89 | 0.89 | 0.89 | 0.5 |
| No log | 1.17 | 500 | 0.1451 | 0.6562 | 0.75 | 0.7 | 0.093 |
| No log | 1.17 | 500 | 0.0068 | 0.9356 | 0.945 | 0.9403 | 0.6 |
| No log | 1.17 | 500 | 0.0848 | 0.8298 | 0.8298 | 0.8298 | 0.5 |
| No log | 1.17 | 500 | 0.0286 | 0.8507 | 0.94 | 0.8931 | 0.3000 |
| No log | 1.17 | 500 | 0.0276 | 0.8241 | 0.89 | 0.8558 | 0.3000 |
| No log | 1.17 | 500 | 0.0253 | 0.8785 | 0.94 | 0.9082 | 0.3000 |
| No log | 1.17 | 500 | 0.0263 | 0.8986 | 0.93 | 0.9140 | 0.6 |
| No log | 1.17 | 500 | 0.0221 | 0.9171 | 0.885 | 0.9008 | 0.4 |
| No log | 1.17 | 500 | 0.0328 | 0.8811 | 0.815 | 0.8468 | 0.6 |
| No log | 1.17 | 500 | 0.0190 | 0.85 | 0.85 | 0.85 | 0.5 |
| No log | 1.17 | 500 | 0.0393 | 0.7887 | 0.84 | 0.8136 | 0.4 |
| No log | 1.17 | 500 | 0.0500 | 0.835 | 0.835 | 0.835 | 0.4 |
| No log | 1.17 | 500 | 0.0026 | 0.9852 | 1.0 | 0.9926 | 0.0860 |
| No log | 1.17 | 500 | 0.0326 | 0.8173 | 0.85 | 0.8333 | 0.5 |
| No log | 1.17 | 500 | 0.0262 | 0.8230 | 0.86 | 0.8411 | 0.4 |
| No log | 1.17 | 500 | 0.0280 | 0.8290 | 0.8081 | 0.8184 | 0.4 |
| No log | 1.17 | 500 | 0.0670 | 0.6941 | 0.7638 | 0.7273 | 0.4 |
| No log | 1.17 | 500 | 0.0241 | 0.8883 | 0.795 | 0.8391 | 0.4 |
| No log | 1.17 | 500 | 0.0177 | 0.9072 | 0.88 | 0.8934 | 0.4 |
| No log | 1.17 | 500 | 0.0075 | 0.9461 | 0.965 | 0.9554 | 0.3000 |
| No log | 1.17 | 500 | 0.0321 | 0.7892 | 0.805 | 0.7970 | 0.6 |
| No log | 1.17 | 500 | 0.0315 | 0.8122 | 0.865 | 0.8378 | 0.3000 |
| No log | 1.17 | 500 | 0.0288 | 0.8702 | 0.905 | 0.8873 | 0.5 |
| No log | 1.17 | 500 | 0.0278 | 0.8476 | 0.695 | 0.7637 | 0.7000 |
| No log | 1.17 | 500 | 0.0287 | 0.8238 | 0.865 | 0.8439 | 0.4 |
| No log | 1.17 | 500 | 0.0295 | 0.9565 | 0.88 | 0.9167 | 0.3000 |
| No log | 1.17 | 500 | 0.0628 | 0.7487 | 0.73 | 0.7392 | 0.4 |
| No log | 1.17 | 500 | 0.0120 | 0.985 | 0.985 | 0.985 | 0.3000 |
| No log | 1.17 | 500 | 0.0010 | 0.9901 | 1.0 | 0.9950 | 0.083 |
| No log | 1.17 | 500 | 0.0157 | 0.96 | 0.96 | 0.96 | 0.3000 |
| No log | 1.17 | 500 | 0.0430 | 0.7917 | 0.76 | 0.7755 | 0.4 |
| No log | 1.17 | 500 | 0.0453 | 0.7756 | 0.795 | 0.7852 | 0.5 |
| No log | 1.17 | 500 | 0.1325 | 0.5231 | 0.7234 | 0.6071 | 0.2 |
| No log | 1.17 | 500 | 0.0349 | 0.7151 | 0.665 | 0.6891 | 0.4 |
| No log | 1.17 | 500 | 0.0164 | 0.9282 | 0.905 | 0.9165 | 0.6 |
| No log | 1.17 | 500 | 0.0237 | 0.9038 | 0.94 | 0.9216 | 0.4 |
| No log | 1.17 | 500 | 0.0036 | 0.9900 | 0.995 | 0.9925 | 0.3000 |
| No log | 1.17 | 500 | 0.0294 | 0.8706 | 0.875 | 0.8728 | 0.3000 |
| No log | 1.17 | 500 | 0.0335 | 0.6728 | 0.73 | 0.7002 | 0.5 |
| No log | 1.17 | 500 | 0.0662 | 0.6996 | 0.8 | 0.7464 | 0.4 |
| No log | 1.17 | 500 | 0.0199 | 0.9175 | 0.945 | 0.9310 | 0.3000 |
| No log | 1.17 | 500 | 0.0218 | 0.865 | 0.865 | 0.865 | 0.6 |
| No log | 1.17 | 500 | 0.0044 | 0.9836 | 1.0 | 0.9917 | 0.2 |
| No log | 1.17 | 500 | 0.0119 | 0.9476 | 0.905 | 0.9258 | 0.6 |
| No log | 1.17 | 500 | 0.0291 | 0.8841 | 0.915 | 0.8993 | 0.3000 |
| No log | 1.17 | 500 | 0.0091 | 0.9083 | 0.9083 | 0.9083 | 0.3000 |
| No log | 1.17 | 500 | 0.0158 | 0.8981 | 0.925 | 0.9113 | 0.4 |
| No log | 1.17 | 500 | 0.0683 | 0.8009 | 0.845 | 0.8224 | 0.2 |
| No log | 1.17 | 500 | 0.0368 | 0.7018 | 0.8163 | 0.7547 | 0.3000 |
| No log | 1.17 | 500 | 0.0034 | 0.9756 | 1.0 | 0.9877 | 0.3000 |
| No log | 1.17 | 500 | 0.0740 | 0.7857 | 0.77 | 0.7778 | 0.5 |
| No log | 1.17 | 500 | 0.0368 | 0.7282 | 0.375 | 0.4950 | 0.6 |
| No log | 1.17 | 500 | 0.0239 | 0.8989 | 0.845 | 0.8711 | 0.3000 |
| No log | 1.17 | 500 | 0.1159 | 0.7014 | 0.775 | 0.7363 | 0.4 |
| No log | 1.17 | 500 | 0.0751 | 0.5584 | 0.86 | 0.6772 | 0.2 |
| No log | 1.17 | 500 | 0.0656 | 0.6299 | 0.8 | 0.7048 | 0.1 |
| No log | 1.17 | 500 | 0.1003 | 0.7154 | 0.905 | 0.7991 | 0.064 |
| No log | 1.17 | 500 | 0.0434 | 0.76 | 0.855 | 0.8047 | 0.4 |
| No log | 1.17 | 500 | 0.0400 | 0.7788 | 0.88 | 0.8263 | 0.3000 |
| No log | 1.17 | 500 | 0.0465 | 0.5217 | 0.5427 | 0.5320 | 0.3000 |
| No log | 1.17 | 500 | 0.0524 | 0.7938 | 0.77 | 0.7817 | 0.4 |
| No log | 1.17 | 500 | 0.0505 | 0.7778 | 0.7636 | 0.7706 | 0.6 |
| No log | 1.17 | 500 | 0.0388 | 0.8194 | 0.93 | 0.8712 | 0.2 |
| No log | 1.17 | 500 | 0.0363 | 0.8068 | 0.835 | 0.8206 | 0.4 |
| No log | 1.17 | 500 | 0.0502 | 0.8309 | 0.86 | 0.8452 | 0.4 |
| No log | 1.17 | 500 | 0.0344 | 0.7739 | 0.89 | 0.8279 | 0.2 |
| No log | 1.17 | 500 | 0.0394 | 0.8182 | 0.81 | 0.8141 | 0.5 |
| No log | 1.17 | 500 | 0.0855 | 0.6681 | 0.795 | 0.7260 | 0.2 |
| No log | 1.17 | 500 | 0.0449 | 0.7835 | 0.76 | 0.7716 | 0.5 |
| No log | 1.17 | 500 | 0.0423 | 0.8161 | 0.91 | 0.8605 | 0.3000 |
| No log | 1.17 | 500 | 0.1496 | 0.6425 | 0.665 | 0.6536 | 0.2 |
| No log | 1.17 | 500 | 0.0487 | 0.8307 | 0.785 | 0.8072 | 0.5 |
| No log | 1.17 | 500 | 0.1076 | 0.7179 | 0.8442 | 0.7760 | 0.092 |
| No log | 1.17 | 500 | 0.0332 | 0.86 | 0.86 | 0.8600 | 0.4 |
| No log | 1.17 | 500 | 0.0265 | 0.8043 | 0.8043 | 0.8043 | 0.5 |
| No log | 1.17 | 500 | 0.0368 | 0.8137 | 0.83 | 0.8218 | 0.3000 |
| No log | 1.17 | 500 | 0.0423 | 0.5543 | 0.485 | 0.5173 | 0.5 |
| No log | 1.17 | 500 | 0.0511 | 0.75 | 0.4615 | 0.5714 | 0.7000 |
| No log | 1.17 | 500 | 0.0332 | 0.7387 | 0.82 | 0.7773 | 0.5 |
| No log | 1.17 | 500 | 0.0372 | 0.6548 | 0.645 | 0.6499 | 0.5 |
| No log | 1.17 | 500 | 0.0505 | 0.7131 | 0.845 | 0.7735 | 0.2 |
| No log | 1.17 | 500 | 0.0383 | 0.7864 | 0.81 | 0.7980 | 0.6 |
| No log | 1.17 | 500 | 0.0563 | 0.7671 | 0.84 | 0.8019 | 0.4 |
| No log | 1.17 | 500 | 0.0991 | 0.4272 | 0.4231 | 0.4251 | 0.3000 |
| No log | 1.17 | 500 | 0.0792 | 0.7333 | 0.825 | 0.7765 | 0.3000 |
| No log | 1.17 | 500 | 0.0523 | 0.7333 | 0.88 | 0.8 | 0.3000 |
| No log | 1.17 | 500 | 0.0913 | 0.7784 | 0.755 | 0.7665 | 0.8 |
| No log | 1.17 | 500 | 0.1089 | 0.6964 | 0.86 | 0.7696 | 0.4 |
| No log | 1.17 | 500 | 0.0702 | 0.6508 | 0.82 | 0.7257 | 0.3000 |
| No log | 1.17 | 500 | 0.1226 | 0.7676 | 0.925 | 0.8390 | 0.063 |
| No log | 1.17 | 500 | 0.1045 | 0.5249 | 0.685 | 0.5944 | 0.0260 |
| No log | 1.17 | 500 | 0.0664 | 0.5 | 0.575 | 0.5349 | 0.3000 |
| No log | 1.17 | 500 | 0.0686 | 0.7869 | 0.96 | 0.8649 | 0.9 |
| No log | 1.17 | 500 | 0.0368 | 0.5989 | 0.56 | 0.5788 | 0.2 |
| No log | 1.17 | 500 | 0.0556 | 0.8 | 0.86 | 0.8289 | 0.3000 |
| No log | 1.17 | 500 | 0.0615 | 0.6471 | 0.9167 | 0.7586 | 0.2 |
| No log | 1.17 | 500 | 0.0465 | 0.7554 | 0.88 | 0.8129 | 0.3000 |
| No log | 1.17 | 500 | 0.0405 | 0.8169 | 0.87 | 0.8426 | 0.4 |
| No log | 1.17 | 500 | 0.0623 | 0.7019 | 0.73 | 0.7157 | 0.3000 |
| No log | 1.17 | 500 | 0.0486 | 0.7810 | 0.82 | 0.8 | 0.4 |
| No log | 1.17 | 500 | 0.0480 | 0.5637 | 0.575 | 0.5693 | 0.5 |
| No log | 1.17 | 500 | 0.0290 | 0.8688 | 0.96 | 0.9121 | 0.098 |
| No log | 1.17 | 500 | 0.0970 | 0.4194 | 0.52 | 0.4643 | 0.5 |
| No log | 1.17 | 500 | 0.0513 | 0.7925 | 0.8442 | 0.8175 | 0.4 |
| No log | 1.17 | 500 | 0.0983 | 0.6667 | 0.4854 | 0.5618 | 0.4 |
| No log | 1.17 | 500 | 0.0582 | 0.5820 | 0.745 | 0.6535 | 0.1 |
| No log | 1.17 | 500 | 0.0387 | 0.8634 | 0.885 | 0.8741 | 0.5 |
| No log | 1.17 | 500 | 0.0582 | 0.8424 | 0.855 | 0.8486 | 0.3000 |
| No log | 1.17 | 500 | 0.0432 | 0.6129 | 0.76 | 0.6786 | 0.2 |
| No log | 1.17 | 500 | 0.0626 | 0.8153 | 0.9141 | 0.8619 | 0.4 |
| No log | 1.17 | 500 | 0.0468 | 0.6681 | 0.7588 | 0.7106 | 0.4 |
| No log | 1.17 | 500 | 0.0531 | 0.7511 | 0.83 | 0.7886 | 0.4 |
| No log | 1.17 | 500 | 0.0462 | 0.7961 | 0.82 | 0.8079 | 0.3000 |
| No log | 1.17 | 500 | 0.0398 | 0.7447 | 0.875 | 0.8046 | 0.2 |
| No log | 1.17 | 500 | 0.0500 | 0.755 | 0.755 | 0.755 | 0.7000 |
| No log | 1.17 | 500 | 0.0513 | 0.7805 | 0.8 | 0.7901 | 0.4 |
| No log | 1.17 | 500 | 0.0376 | 0.8402 | 0.92 | 0.8783 | 0.3000 |
| No log | 1.17 | 500 | 0.0478 | 0.7824 | 0.755 | 0.7684 | 0.5 |
| No log | 1.17 | 500 | 0.0306 | 0.8865 | 0.82 | 0.8519 | 0.5 |
| No log | 1.17 | 500 | 0.0631 | 0.7617 | 0.815 | 0.7874 | 0.3000 |
| No log | 1.17 | 500 | 0.0463 | 0.5 | 0.625 | 0.5556 | 0.2 |
| No log | 1.17 | 500 | 0.0563 | 0.5103 | 0.745 | 0.6057 | 0.4 |
| No log | 1.17 | 500 | 0.0443 | 0.7682 | 0.845 | 0.8048 | 0.2 |
| No log | 1.17 | 500 | 0.0644 | 0.5904 | 0.8 | 0.6794 | 0.6 |
| No log | 1.17 | 500 | 0.0595 | 0.7328 | 0.85 | 0.7870 | 0.3000 |
| No log | 1.17 | 500 | 0.0389 | 0.7717 | 0.845 | 0.8067 | 0.3000 |
| No log | 1.17 | 500 | 0.1053 | 0.5017 | 0.73 | 0.5947 | 0.3000 |
| No log | 1.17 | 500 | 0.0697 | 0.8071 | 0.795 | 0.8010 | 0.5 |
| No log | 1.17 | 500 | 0.0487 | 0.6523 | 0.835 | 0.7325 | 0.4 |
| No log | 1.17 | 500 | 0.1022 | 0.5931 | 0.6111 | 0.6020 | 0.2 |
| No log | 1.17 | 500 | 0.0560 | 0.7217 | 0.8384 | 0.7757 | 0.4 |
| No log | 1.17 | 500 | 0.0189 | 0.9327 | 0.97 | 0.9510 | 0.3000 |
| No log | 1.17 | 500 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.3000 |
| No log | 1.17 | 500 | 0.0028 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 1.17 | 500 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.4 |
| No log | 1.17 | 500 | 0.0006 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 1.17 | 500 | 0.0036 | 0.99 | 0.99 | 0.99 | 0.9 |
| No log | 1.17 | 500 | 0.0009 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 1.17 | 500 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 1.17 | 500 | 0.0185 | 0.9786 | 0.915 | 0.9457 | 0.4 |
| No log | 1.17 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.17 | 500 | 0.0330 | 0.8973 | 0.83 | 0.8623 | 0.2 |
| No log | 1.17 | 500 | 0.0017 | 0.9901 | 1.0 | 0.9950 | 0.3000 |
| No log | 1.17 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 1.17 | 500 | 0.0038 | 0.99 | 0.99 | 0.99 | 0.9 |
| No log | 1.17 | 500 | 0.0057 | 0.9703 | 0.98 | 0.9751 | 0.7000 |
| No log | 1.17 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.9 |
| No log | 1.17 | 500 | 0.0004 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 1.17 | 500 | 0.0025 | 0.9900 | 0.995 | 0.9925 | 0.2 |
| No log | 1.17 | 500 | 0.0017 | 0.9950 | 1.0 | 0.9975 | 0.7000 |
| No log | 1.17 | 500 | 0.0097 | 0.9695 | 0.955 | 0.9622 | 0.3000 |
| No log | 1.17 | 500 | 0.1400 | 0.4292 | 0.47 | 0.4487 | 0.8 |
| No log | 1.17 | 500 | 0.1016 | 0.2529 | 0.5931 | 0.3546 | 0.2 |
| No log | 1.17 | 500 | 0.1260 | 0.5954 | 0.78 | 0.6753 | 0.2 |
| No log | 1.17 | 500 | 0.1194 | 0.5873 | 0.74 | 0.6549 | 0.4 |
| No log | 1.76 | 750 | 0.0404 | 0.9728 | 0.895 | 0.9323 | 0.6 |
| No log | 1.76 | 750 | 0.0125 | 0.9235 | 0.905 | 0.9141 | 0.6 |
| No log | 1.76 | 750 | 0.0329 | 0.8545 | 0.91 | 0.8814 | 0.3000 |
| No log | 1.76 | 750 | 0.0157 | 0.8930 | 0.96 | 0.9253 | 0.2 |
| No log | 1.76 | 750 | 0.0348 | 0.9474 | 0.9 | 0.9231 | 0.5 |
| No log | 1.76 | 750 | 0.0094 | 0.9754 | 0.99 | 0.9826 | 0.7000 |
| No log | 1.76 | 750 | 0.0140 | 0.9588 | 0.9347 | 0.9466 | 0.7000 |
| No log | 1.76 | 750 | 0.0102 | 0.98 | 0.98 | 0.98 | 0.6 |
| No log | 1.76 | 750 | 0.0131 | 0.9476 | 0.995 | 0.9707 | 0.7000 |
| No log | 1.76 | 750 | 0.0294 | 0.9126 | 0.94 | 0.9261 | 0.3000 |
| No log | 1.76 | 750 | 0.0082 | 0.9662 | 1.0 | 0.9828 | 0.5 |
| No log | 1.76 | 750 | 0.0131 | 0.9415 | 0.965 | 0.9531 | 0.6 |
| No log | 1.76 | 750 | 0.0071 | 0.9615 | 1.0 | 0.9804 | 0.4 |
| No log | 1.76 | 750 | 0.0192 | 0.9522 | 0.995 | 0.9731 | 0.7000 |
| No log | 1.76 | 750 | 0.0138 | 0.9517 | 0.985 | 0.9681 | 0.6 |
| No log | 1.76 | 750 | 0.0102 | 0.9384 | 0.99 | 0.9635 | 0.5 |
| No log | 1.76 | 750 | 0.0098 | 0.9797 | 0.9797 | 0.9797 | 0.9 |
| No log | 1.76 | 750 | 0.0123 | 0.9336 | 0.985 | 0.9586 | 0.5 |
| No log | 1.76 | 750 | 0.0446 | 0.9043 | 0.8543 | 0.8786 | 0.7000 |
| No log | 1.76 | 750 | 0.0163 | 0.9259 | 1.0 | 0.9615 | 0.069 |
| No log | 1.76 | 750 | 0.0124 | 0.9299 | 0.995 | 0.9614 | 0.065 |
| No log | 1.76 | 750 | 0.0489 | 0.9592 | 0.94 | 0.9495 | 0.2 |
| No log | 1.76 | 750 | 0.0046 | 1.0 | 0.945 | 0.9717 | 0.7000 |
| No log | 1.76 | 750 | 0.0064 | 0.9846 | 0.9746 | 0.9796 | 0.3000 |
| No log | 1.76 | 750 | 0.0188 | 0.9476 | 0.995 | 0.9707 | 0.2 |
| No log | 1.76 | 750 | 0.0541 | 0.8844 | 0.88 | 0.8822 | 0.4 |
| No log | 1.76 | 750 | 0.0062 | 0.9190 | 0.9698 | 0.9438 | 0.4 |
| No log | 1.76 | 750 | 0.0214 | 0.9320 | 0.96 | 0.9458 | 0.6 |
| No log | 1.76 | 750 | 0.0160 | 0.9314 | 0.95 | 0.9406 | 0.5 |
| No log | 1.76 | 750 | 0.0153 | 0.9476 | 0.995 | 0.9707 | 0.083 |
| No log | 1.76 | 750 | 0.0317 | 0.9412 | 0.96 | 0.9505 | 0.3000 |
| No log | 1.76 | 750 | 0.0255 | 0.9336 | 0.985 | 0.9586 | 0.5 |
| No log | 1.76 | 750 | 0.0152 | 0.9409 | 0.955 | 0.9479 | 0.2 |
| No log | 1.76 | 750 | 0.0111 | 0.9709 | 1.0 | 0.9852 | 0.3000 |
| No log | 1.76 | 750 | 0.0106 | 0.97 | 0.97 | 0.97 | 0.4 |
| No log | 1.76 | 750 | 0.0793 | 0.8684 | 0.825 | 0.8462 | 0.4 |
| No log | 1.76 | 750 | 0.0102 | 0.9378 | 0.98 | 0.9584 | 0.2 |
| No log | 1.76 | 750 | 0.0183 | 0.98 | 0.98 | 0.98 | 0.3000 |
| No log | 1.76 | 750 | 0.1075 | 0.6990 | 0.6884 | 0.6937 | 0.089 |
| No log | 1.76 | 750 | 0.0407 | 0.9485 | 0.9246 | 0.9364 | 0.2 |
| No log | 1.76 | 750 | 0.0508 | 0.8274 | 0.935 | 0.8779 | 0.4 |
| No log | 1.76 | 750 | 0.0113 | 0.9645 | 0.95 | 0.9572 | 0.2 |
| No log | 1.76 | 750 | 0.0101 | 0.9756 | 1.0 | 0.9877 | 0.2 |
| No log | 1.76 | 750 | 0.0055 | 0.97 | 0.9749 | 0.9724 | 0.5 |
| No log | 1.76 | 750 | 0.0066 | 0.9559 | 0.975 | 0.9653 | 0.6 |
| No log | 1.76 | 750 | 0.0039 | 0.9519 | 0.99 | 0.9706 | 0.6 |
| No log | 1.76 | 750 | 0.0097 | 0.9569 | 1.0 | 0.9780 | 0.6 |
| No log | 1.76 | 750 | 0.0322 | 0.8930 | 0.96 | 0.9253 | 0.5 |
| No log | 1.76 | 750 | 0.0133 | 0.9804 | 1.0 | 0.9901 | 0.2 |
| No log | 1.76 | 750 | 0.0250 | 0.9563 | 0.985 | 0.9704 | 0.3000 |
| No log | 1.76 | 750 | 0.0157 | 0.9847 | 0.965 | 0.9747 | 0.9 |
| No log | 1.76 | 750 | 0.0045 | 0.9366 | 0.9746 | 0.9552 | 0.6 |
| No log | 1.76 | 750 | 0.0824 | 0.7308 | 0.855 | 0.7880 | 0.0880 |
| No log | 1.76 | 750 | 0.0654 | 0.8599 | 0.89 | 0.8747 | 0.2 |
| No log | 1.76 | 750 | 0.0104 | 0.9660 | 0.995 | 0.9803 | 0.6 |
| No log | 1.76 | 750 | 0.0148 | 0.9524 | 1.0 | 0.9756 | 0.067 |
| No log | 1.76 | 750 | 0.0991 | 0.8984 | 0.84 | 0.8682 | 0.3000 |
| No log | 1.76 | 750 | 0.0069 | 0.9709 | 1.0 | 0.9852 | 0.3000 |
| No log | 1.76 | 750 | 0.1156 | 0.9353 | 0.795 | 0.8595 | 0.7000 |
| No log | 1.76 | 750 | 0.0117 | 0.9565 | 0.99 | 0.9730 | 0.8 |
| No log | 1.76 | 750 | 0.0094 | 0.9660 | 0.995 | 0.9803 | 0.3000 |
| No log | 1.76 | 750 | 0.0074 | 0.9598 | 0.955 | 0.9574 | 0.9 |
| No log | 1.76 | 750 | 0.0493 | 0.8990 | 0.935 | 0.9167 | 0.4 |
| No log | 1.76 | 750 | 0.0071 | 0.9660 | 0.995 | 0.9803 | 0.2 |
| No log | 1.76 | 750 | 0.0115 | 0.9614 | 0.995 | 0.9779 | 0.8 |
| No log | 1.76 | 750 | 0.0095 | 0.9429 | 0.99 | 0.9659 | 0.8 |
| No log | 1.76 | 750 | 0.0146 | 0.9567 | 0.995 | 0.9755 | 0.076 |
| No log | 1.76 | 750 | 0.0078 | 0.9709 | 1.0 | 0.9852 | 0.5 |
| No log | 1.76 | 750 | 0.0307 | 0.9344 | 0.855 | 0.8930 | 0.5 |
| No log | 1.76 | 750 | 0.0535 | 0.9031 | 0.885 | 0.8939 | 0.4 |
| No log | 1.76 | 750 | 0.0094 | 0.9282 | 0.97 | 0.9487 | 0.2 |
| No log | 1.76 | 750 | 0.0607 | 0.7906 | 0.925 | 0.8525 | 0.4 |
| No log | 1.76 | 750 | 0.0112 | 0.9479 | 1.0 | 0.9732 | 0.054 |
| No log | 1.76 | 750 | 0.0169 | 0.9648 | 0.96 | 0.9624 | 0.8 |
| No log | 1.76 | 750 | 0.0157 | 0.8597 | 0.95 | 0.9026 | 0.5 |
| No log | 1.76 | 750 | 0.0074 | 0.9406 | 0.95 | 0.9453 | 0.3000 |
| No log | 1.76 | 750 | 0.0185 | 0.9517 | 0.985 | 0.9681 | 0.4 |
| No log | 1.76 | 750 | 0.0135 | 0.9543 | 0.94 | 0.9471 | 0.4 |
| No log | 1.76 | 750 | 0.0519 | 0.9531 | 0.915 | 0.9337 | 0.2 |
| No log | 1.76 | 750 | 0.0223 | 0.8319 | 0.94 | 0.8826 | 0.2 |
| No log | 1.76 | 750 | 0.0676 | 0.7434 | 0.8485 | 0.7925 | 0.015 |
| No log | 1.76 | 750 | 0.0264 | 0.96 | 0.96 | 0.96 | 0.2 |
| No log | 1.76 | 750 | 0.1184 | 0.8019 | 0.83 | 0.8157 | 0.3000 |
| No log | 1.76 | 750 | 0.0199 | 0.8812 | 0.89 | 0.8856 | 0.5 |
| No log | 1.76 | 750 | 0.0644 | 0.7681 | 0.795 | 0.7813 | 0.4 |
| No log | 1.76 | 750 | 0.0214 | 0.8806 | 0.885 | 0.8828 | 0.4 |
| No log | 1.76 | 750 | 0.0724 | 0.8442 | 0.84 | 0.8421 | 0.3000 |
| No log | 1.76 | 750 | 0.0876 | 0.7848 | 0.875 | 0.8274 | 0.4 |
| No log | 1.76 | 750 | 0.0605 | 0.5897 | 0.5779 | 0.5838 | 0.4 |
| No log | 1.76 | 750 | 0.0508 | 0.7922 | 0.915 | 0.8492 | 0.3000 |
| No log | 1.76 | 750 | 0.0460 | 0.8364 | 0.895 | 0.8647 | 0.4 |
| No log | 1.76 | 750 | 0.0955 | 0.7522 | 0.865 | 0.8047 | 0.3000 |
| No log | 1.76 | 750 | 0.0437 | 0.8607 | 0.865 | 0.8628 | 0.6 |
| No log | 1.76 | 750 | 0.0255 | 0.8719 | 0.885 | 0.8784 | 0.5 |
| No log | 1.76 | 750 | 0.0650 | 0.7216 | 0.92 | 0.8088 | 0.2 |
| No log | 1.76 | 750 | 0.0583 | 0.9115 | 0.875 | 0.8929 | 0.6 |
| No log | 1.76 | 750 | 0.0549 | 0.9040 | 0.895 | 0.8995 | 0.6 |
| No log | 1.76 | 750 | 0.0462 | 0.7713 | 0.86 | 0.8132 | 0.4 |
| No log | 1.76 | 750 | 0.0340 | 0.8009 | 0.8894 | 0.8429 | 0.4 |
| No log | 1.76 | 750 | 0.0608 | 0.7013 | 0.81 | 0.7517 | 0.4 |
| No log | 1.76 | 750 | 0.0697 | 0.75 | 0.825 | 0.7857 | 0.5 |
| No log | 1.76 | 750 | 0.0547 | 0.8462 | 0.88 | 0.8627 | 0.4 |
| No log | 1.76 | 750 | 0.0434 | 0.8482 | 0.81 | 0.8286 | 0.5 |
| No log | 1.76 | 750 | 0.1335 | 0.8116 | 0.84 | 0.8256 | 0.2 |
| No log | 1.76 | 750 | 0.0240 | 0.8953 | 0.77 | 0.8280 | 0.7000 |
| No log | 1.76 | 750 | 0.0379 | 0.8947 | 0.8629 | 0.8786 | 0.3000 |
| No log | 1.76 | 750 | 0.0696 | 0.8585 | 0.88 | 0.8691 | 0.4 |
| No log | 1.76 | 750 | 0.0798 | 0.7240 | 0.8040 | 0.7619 | 0.3000 |
| No log | 1.76 | 750 | 0.0235 | 0.7933 | 0.825 | 0.8088 | 0.3000 |
| No log | 1.76 | 750 | 0.0809 | 0.7887 | 0.84 | 0.8136 | 0.5 |
| No log | 1.76 | 750 | 0.0347 | 0.8071 | 0.795 | 0.8010 | 0.4 |
| No log | 1.76 | 750 | 0.0643 | 0.7629 | 0.885 | 0.8194 | 0.3000 |
| No log | 1.76 | 750 | 0.0710 | 0.8358 | 0.84 | 0.8379 | 0.5 |
| No log | 1.76 | 750 | 0.1096 | 0.7913 | 0.815 | 0.8030 | 0.5 |
| No log | 1.76 | 750 | 0.0757 | 0.8167 | 0.735 | 0.7737 | 0.4 |
| No log | 1.76 | 750 | 0.0617 | 0.7840 | 0.835 | 0.8087 | 0.4 |
| No log | 1.76 | 750 | 0.0502 | 0.7712 | 0.9146 | 0.8368 | 0.3000 |
| No log | 1.76 | 750 | 0.1509 | 0.6026 | 0.925 | 0.7298 | 0.035 |
| No log | 1.76 | 750 | 0.0777 | 0.472 | 0.59 | 0.5244 | 0.3000 |
| No log | 1.76 | 750 | 0.0977 | 0.8901 | 0.85 | 0.8696 | 0.4 |
| No log | 1.76 | 750 | 0.2090 | 0.3256 | 0.8442 | 0.4699 | 0.002 |
| No log | 1.76 | 750 | 0.0802 | 0.8902 | 0.7739 | 0.8280 | 0.4 |
| No log | 1.76 | 750 | 0.0825 | 0.7804 | 0.835 | 0.8068 | 0.5 |
| No log | 1.76 | 750 | 0.0247 | 0.9358 | 0.8838 | 0.9091 | 0.3000 |
| No log | 1.76 | 750 | 0.0693 | 0.8905 | 0.935 | 0.9122 | 0.3000 |
| No log | 1.76 | 750 | 0.0263 | 0.8731 | 0.8643 | 0.8687 | 0.5 |
| No log | 1.76 | 750 | 0.0314 | 0.8413 | 0.795 | 0.8175 | 0.6 |
| No log | 1.76 | 750 | 0.0409 | 0.6844 | 0.77 | 0.7247 | 0.4 |
| No log | 1.76 | 750 | 0.0626 | 0.8485 | 0.84 | 0.8442 | 0.6 |
| No log | 1.76 | 750 | 0.0607 | 0.6820 | 0.815 | 0.7426 | 0.4 |
| No log | 1.76 | 750 | 0.0648 | 0.9175 | 0.945 | 0.9310 | 0.3000 |
| No log | 1.76 | 750 | 0.0606 | 0.8293 | 0.85 | 0.8395 | 0.5 |
| No log | 1.76 | 750 | 0.1217 | 0.7069 | 0.82 | 0.7593 | 0.4 |
| No log | 1.76 | 750 | 0.0208 | 0.8333 | 0.7538 | 0.7916 | 0.7000 |
| No log | 1.76 | 750 | 0.1449 | 0.5784 | 0.7789 | 0.6638 | 0.048 |
| No log | 1.76 | 750 | 0.0940 | 0.8842 | 0.84 | 0.8615 | 0.5 |
| No log | 1.76 | 750 | 0.0492 | 0.8 | 0.9 | 0.8471 | 0.4 |
| No log | 1.76 | 750 | 0.0610 | 0.8551 | 0.915 | 0.8841 | 0.4 |
| No log | 1.76 | 750 | 0.0945 | 0.8247 | 0.8 | 0.8122 | 0.3000 |
| No log | 1.76 | 750 | 0.0541 | 0.9029 | 0.79 | 0.8427 | 0.7000 |
| No log | 1.76 | 750 | 0.1256 | 0.8667 | 0.78 | 0.8211 | 0.5 |
| No log | 1.76 | 750 | 0.0367 | 0.8551 | 0.885 | 0.8698 | 0.6 |
| No log | 1.76 | 750 | 0.0566 | 0.8821 | 0.86 | 0.8709 | 0.5 |
| No log | 1.76 | 750 | 0.0169 | 0.8706 | 0.875 | 0.8728 | 0.6 |
| No log | 1.76 | 750 | 0.0930 | 0.716 | 0.895 | 0.7956 | 0.3000 |
| No log | 1.76 | 750 | 0.0373 | 0.8219 | 0.9 | 0.8592 | 0.5 |
| No log | 1.76 | 750 | 0.0591 | 0.8279 | 0.89 | 0.8578 | 0.4 |
| No log | 1.76 | 750 | 0.0366 | 0.8796 | 0.84 | 0.8593 | 0.6 |
| No log | 1.76 | 750 | 0.0839 | 0.8299 | 0.805 | 0.8173 | 0.5 |
| No log | 1.76 | 750 | 0.0345 | 0.9086 | 0.895 | 0.9018 | 0.6 |
| No log | 1.76 | 750 | 0.0666 | 0.6256 | 0.71 | 0.6651 | 0.2 |
| No log | 1.76 | 750 | 0.1225 | 0.7861 | 0.68 | 0.7292 | 0.5 |
| No log | 1.76 | 750 | 0.0279 | 0.8730 | 0.825 | 0.8483 | 0.4 |
| No log | 1.76 | 750 | 0.0679 | 0.7725 | 0.9 | 0.8314 | 0.4 |
| No log | 1.76 | 750 | 0.0876 | 0.7617 | 0.895 | 0.8230 | 0.3000 |
| No log | 1.76 | 750 | 0.0518 | 0.8009 | 0.885 | 0.8409 | 0.5 |
| No log | 1.76 | 750 | 0.0227 | 0.8731 | 0.86 | 0.8665 | 0.6 |
| No log | 1.76 | 750 | 0.0171 | 0.8451 | 0.9 | 0.8717 | 0.3000 |
| No log | 1.76 | 750 | 0.1085 | 0.8010 | 0.765 | 0.7826 | 0.6 |
| No log | 1.76 | 750 | 0.0577 | 0.6376 | 0.73 | 0.6807 | 0.3000 |
| No log | 1.76 | 750 | 0.0764 | 0.8520 | 0.95 | 0.8983 | 0.0720 |
| No log | 1.76 | 750 | 0.1073 | 0.4710 | 0.61 | 0.5316 | 0.085 |
| No log | 1.76 | 750 | 0.0469 | 0.7325 | 0.8990 | 0.8073 | 0.096 |
| No log | 1.76 | 750 | 0.0669 | 0.8967 | 0.825 | 0.8594 | 0.4 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.002 |
| No log | 1.76 | 750 | 0.0115 | 0.7619 | 0.8934 | 0.8224 | 0.7000 |
| No log | 1.76 | 750 | 0.0043 | 0.9238 | 0.97 | 0.9463 | 0.5 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.006 |
| No log | 1.76 | 750 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.0440 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.049 |
| No log | 1.76 | 750 | 0.0027 | 0.9947 | 1.0 | 0.9973 | 0.069 |
| No log | 1.76 | 750 | 0.0013 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.76 | 750 | 0.0029 | 0.995 | 0.995 | 0.995 | 0.8 |
| No log | 1.76 | 750 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 1.76 | 750 | 0.0083 | 0.975 | 0.975 | 0.975 | 0.017 |
| No log | 1.76 | 750 | 0.0140 | 0.9946 | 0.925 | 0.9585 | 0.2 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0151 | 0.9689 | 0.935 | 0.9517 | 0.3000 |
| No log | 1.76 | 750 | 0.0013 | 0.9950 | 1.0 | 0.9975 | 0.0510 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.017 |
| No log | 1.76 | 750 | 0.0034 | 0.9949 | 0.985 | 0.9899 | 0.2 |
| No log | 1.76 | 750 | 0.0012 | 1.0 | 0.985 | 0.9924 | 0.8 |
| No log | 1.76 | 750 | 0.0032 | 0.9614 | 0.995 | 0.9779 | 0.2 |
| No log | 1.76 | 750 | 0.0372 | 0.9162 | 0.82 | 0.8654 | 0.6 |
| No log | 1.76 | 750 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.038 |
| No log | 1.76 | 750 | 0.0018 | 1.0 | 0.98 | 0.9899 | 0.3000 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.017 |
| No log | 1.76 | 750 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 1.76 | 750 | 0.0013 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 1.76 | 750 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0046 | 0.9333 | 0.98 | 0.9561 | 0.3000 |
| No log | 1.76 | 750 | 0.0019 | 0.9901 | 1.0 | 0.9950 | 0.049 |
| No log | 1.76 | 750 | 0.0154 | 0.9846 | 0.96 | 0.9722 | 0.0370 |
| No log | 1.76 | 750 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.0090 |
| No log | 1.76 | 750 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0023 | 0.9852 | 1.0 | 0.9926 | 0.2 |
| No log | 1.76 | 750 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 1.76 | 750 | 0.0038 | 0.9792 | 1.0 | 0.9895 | 0.3000 |
| No log | 1.76 | 750 | 0.0038 | 0.9174 | 1.0 | 0.9569 | 0.093 |
| No log | 1.76 | 750 | 0.0037 | 0.9804 | 1.0 | 0.9901 | 0.4 |
| No log | 1.76 | 750 | 0.0013 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 1.76 | 750 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 1.76 | 750 | 0.0019 | 0.9949 | 0.985 | 0.9899 | 0.2 |
| No log | 1.76 | 750 | 0.0176 | 0.9275 | 0.9697 | 0.9481 | 0.7000 |
| No log | 1.76 | 750 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 1.76 | 750 | 0.0019 | 0.9900 | 0.995 | 0.9925 | 0.6 |
| No log | 1.76 | 750 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.007 |
| No log | 1.76 | 750 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0147 | 0.9390 | 1.0 | 0.9685 | 0.0140 |
| No log | 1.76 | 750 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 1.76 | 750 | 0.0147 | 0.9474 | 0.9 | 0.9231 | 0.5 |
| No log | 1.76 | 750 | 0.0020 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 1.76 | 750 | 0.0006 | 0.9950 | 1.0 | 0.9975 | 0.6 |
| No log | 1.76 | 750 | 0.0012 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 1.76 | 750 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.5 |
| No log | 1.76 | 750 | 0.0110 | 0.7787 | 0.985 | 0.8698 | 0.039 |
| No log | 1.76 | 750 | 0.0070 | 1.0 | 0.99 | 0.9950 | 0.0090 |
| No log | 1.76 | 750 | 0.0060 | 0.9704 | 0.985 | 0.9777 | 0.097 |
| No log | 1.76 | 750 | 0.0164 | 0.7285 | 0.8173 | 0.7703 | 0.6 |
| No log | 1.76 | 750 | 0.0071 | 0.9091 | 0.95 | 0.9291 | 0.4 |
| No log | 1.76 | 750 | 0.0358 | 0.9227 | 0.895 | 0.9086 | 0.5 |
| No log | 1.76 | 750 | 0.1316 | 0.6324 | 0.7679 | 0.6935 | 0.096 |
| No log | 1.76 | 750 | 0.0057 | 0.9330 | 0.975 | 0.9535 | 0.4 |
| No log | 1.76 | 750 | 0.0870 | 0.7854 | 0.8564 | 0.8193 | 0.3000 |
| No log | 1.76 | 750 | 0.0280 | 0.8486 | 0.925 | 0.8852 | 0.3000 |
| No log | 1.76 | 750 | 0.0276 | 0.9477 | 0.815 | 0.8763 | 0.6 |
| No log | 1.76 | 750 | 0.0259 | 0.8889 | 0.92 | 0.9042 | 0.3000 |
| No log | 1.76 | 750 | 0.0252 | 0.8767 | 0.96 | 0.9165 | 0.4 |
| No log | 1.76 | 750 | 0.0236 | 0.9301 | 0.865 | 0.8964 | 0.5 |
| No log | 1.76 | 750 | 0.0321 | 0.875 | 0.84 | 0.8571 | 0.4 |
| No log | 1.76 | 750 | 0.0192 | 0.8325 | 0.845 | 0.8387 | 0.5 |
| No log | 1.76 | 750 | 0.0392 | 0.8531 | 0.755 | 0.8011 | 0.6 |
| No log | 1.76 | 750 | 0.0475 | 0.8208 | 0.87 | 0.8447 | 0.3000 |
| No log | 1.76 | 750 | 0.0024 | 0.9950 | 1.0 | 0.9975 | 0.054 |
| No log | 1.76 | 750 | 0.0321 | 0.8152 | 0.86 | 0.8370 | 0.5 |
| No log | 1.76 | 750 | 0.0257 | 0.8082 | 0.885 | 0.8449 | 0.3000 |
| No log | 1.76 | 750 | 0.0267 | 0.8325 | 0.8283 | 0.8304 | 0.4 |
| No log | 1.76 | 750 | 0.0650 | 0.6822 | 0.8090 | 0.7402 | 0.3000 |
| No log | 1.76 | 750 | 0.0239 | 0.8624 | 0.815 | 0.8380 | 0.4 |
| No log | 1.76 | 750 | 0.0189 | 0.8558 | 0.92 | 0.8867 | 0.2 |
| No log | 1.76 | 750 | 0.0062 | 0.9552 | 0.96 | 0.9576 | 0.4 |
| No log | 1.76 | 750 | 0.0308 | 0.7763 | 0.85 | 0.8115 | 0.5 |
| No log | 1.76 | 750 | 0.0308 | 0.7991 | 0.895 | 0.8443 | 0.2 |
| No log | 1.76 | 750 | 0.0294 | 0.8894 | 0.885 | 0.8872 | 0.5 |
| No log | 1.76 | 750 | 0.0243 | 0.9078 | 0.64 | 0.7507 | 0.7000 |
| No log | 1.76 | 750 | 0.0271 | 0.8447 | 0.87 | 0.8571 | 0.4 |
| No log | 1.76 | 750 | 0.0273 | 0.9381 | 0.91 | 0.9239 | 0.2 |
| No log | 1.76 | 750 | 0.0632 | 0.7083 | 0.765 | 0.7356 | 0.3000 |
| No log | 1.76 | 750 | 0.0107 | 0.9802 | 0.99 | 0.9851 | 0.2 |
| No log | 1.76 | 750 | 0.0008 | 0.9901 | 1.0 | 0.9950 | 0.046 |
| No log | 1.76 | 750 | 0.0153 | 0.96 | 0.96 | 0.96 | 0.3000 |
| No log | 1.76 | 750 | 0.0437 | 0.7558 | 0.82 | 0.7866 | 0.2 |
| No log | 1.76 | 750 | 0.0435 | 0.7477 | 0.83 | 0.7867 | 0.4 |
| No log | 1.76 | 750 | 0.1208 | 0.5965 | 0.7234 | 0.6538 | 0.3000 |
| No log | 1.76 | 750 | 0.0332 | 0.8411 | 0.635 | 0.7236 | 0.5 |
| No log | 1.76 | 750 | 0.0122 | 0.9394 | 0.93 | 0.9347 | 0.5 |
| No log | 1.76 | 750 | 0.0245 | 0.8744 | 0.94 | 0.9060 | 0.3000 |
| No log | 1.76 | 750 | 0.0043 | 0.9949 | 0.98 | 0.9874 | 0.7000 |
| No log | 1.76 | 750 | 0.0251 | 0.8934 | 0.88 | 0.8866 | 0.4 |
| No log | 1.76 | 750 | 0.0317 | 0.6609 | 0.77 | 0.7113 | 0.4 |
| No log | 1.76 | 750 | 0.0646 | 0.73 | 0.7487 | 0.7392 | 0.4 |
| No log | 1.76 | 750 | 0.0195 | 0.9293 | 0.92 | 0.9246 | 0.4 |
| No log | 1.76 | 750 | 0.0199 | 0.8769 | 0.855 | 0.8658 | 0.6 |
| No log | 1.76 | 750 | 0.0065 | 0.9833 | 0.9833 | 0.9833 | 0.3000 |
| No log | 1.76 | 750 | 0.0117 | 0.9436 | 0.92 | 0.9316 | 0.6 |
| No log | 1.76 | 750 | 0.0315 | 0.9062 | 0.87 | 0.8878 | 0.4 |
| No log | 1.76 | 750 | 0.0063 | 0.9569 | 0.925 | 0.9407 | 0.5 |
| No log | 1.76 | 750 | 0.0160 | 0.9154 | 0.92 | 0.9177 | 0.4 |
| No log | 1.76 | 750 | 0.0672 | 0.8438 | 0.81 | 0.8265 | 0.3000 |
| No log | 1.76 | 750 | 0.0361 | 0.7914 | 0.7551 | 0.7728 | 0.5 |
| No log | 1.76 | 750 | 0.0036 | 0.9804 | 1.0 | 0.9901 | 0.3000 |
| No log | 1.76 | 750 | 0.0739 | 0.75 | 0.78 | 0.7647 | 0.4 |
| No log | 1.76 | 750 | 0.0345 | 0.4492 | 0.575 | 0.5044 | 0.2 |
| No log | 1.76 | 750 | 0.0241 | 0.8844 | 0.88 | 0.8822 | 0.2 |
| No log | 1.76 | 750 | 0.1105 | 0.6986 | 0.765 | 0.7303 | 0.4 |
| No log | 1.76 | 750 | 0.0745 | 0.6509 | 0.69 | 0.6699 | 0.6 |
| No log | 1.76 | 750 | 0.0700 | 0.6098 | 0.805 | 0.6940 | 0.077 |
| No log | 1.76 | 750 | 0.1006 | 0.7184 | 0.88 | 0.7910 | 0.084 |
| No log | 1.76 | 750 | 0.0416 | 0.7262 | 0.915 | 0.8097 | 0.3000 |
| No log | 1.76 | 750 | 0.0382 | 0.7617 | 0.895 | 0.8230 | 0.3000 |
| No log | 1.76 | 750 | 0.0455 | 0.4688 | 0.6030 | 0.5275 | 0.3000 |
| No log | 1.76 | 750 | 0.0526 | 0.7442 | 0.8 | 0.7711 | 0.3000 |
| No log | 1.76 | 750 | 0.0478 | 0.7049 | 0.7818 | 0.7414 | 0.4 |
| No log | 1.76 | 750 | 0.0412 | 0.8431 | 0.86 | 0.8515 | 0.3000 |
| No log | 1.76 | 750 | 0.0387 | 0.7963 | 0.86 | 0.8269 | 0.3000 |
| No log | 1.76 | 750 | 0.0522 | 0.8204 | 0.845 | 0.8325 | 0.3000 |
| No log | 1.76 | 750 | 0.0335 | 0.7686 | 0.88 | 0.8205 | 0.2 |
| No log | 1.76 | 750 | 0.0387 | 0.8394 | 0.81 | 0.8244 | 0.5 |
| No log | 1.76 | 750 | 0.0848 | 0.7268 | 0.745 | 0.7358 | 0.3000 |
| No log | 1.76 | 750 | 0.0451 | 0.7119 | 0.84 | 0.7706 | 0.3000 |
| No log | 1.76 | 750 | 0.0430 | 0.8008 | 0.945 | 0.8670 | 0.2 |
| No log | 1.76 | 750 | 0.1563 | 0.6537 | 0.67 | 0.6617 | 0.2 |
| No log | 1.76 | 750 | 0.0510 | 0.8187 | 0.745 | 0.7801 | 0.5 |
| No log | 1.76 | 750 | 0.1078 | 0.6967 | 0.8543 | 0.7675 | 0.081 |
| No log | 1.76 | 750 | 0.0362 | 0.8333 | 0.875 | 0.8537 | 0.3000 |
| No log | 1.76 | 750 | 0.0266 | 0.7115 | 0.8043 | 0.7551 | 0.4 |
| No log | 1.76 | 750 | 0.0368 | 0.8602 | 0.8 | 0.8290 | 0.4 |
| No log | 1.76 | 750 | 0.0419 | 0.6159 | 0.465 | 0.5299 | 0.6 |
| No log | 1.76 | 750 | 0.0575 | 0.44 | 0.8462 | 0.5789 | 0.092 |
| No log | 1.76 | 750 | 0.0347 | 0.75 | 0.795 | 0.7718 | 0.5 |
| No log | 1.76 | 750 | 0.0350 | 0.5811 | 0.77 | 0.6624 | 0.3000 |
| No log | 1.76 | 750 | 0.0516 | 0.7087 | 0.815 | 0.7581 | 0.2 |
| No log | 1.76 | 750 | 0.0381 | 0.8020 | 0.79 | 0.7960 | 0.6 |
| No log | 1.76 | 750 | 0.0581 | 0.7189 | 0.895 | 0.7973 | 0.2 |
| No log | 1.76 | 750 | 0.0994 | 0.4487 | 0.3365 | 0.3846 | 0.3000 |
| No log | 1.76 | 750 | 0.0792 | 0.7078 | 0.86 | 0.7765 | 0.2 |
| No log | 1.76 | 750 | 0.0518 | 0.7604 | 0.825 | 0.7914 | 0.5 |
| No log | 1.76 | 750 | 0.0853 | 0.8021 | 0.75 | 0.7752 | 0.8 |
| No log | 1.76 | 750 | 0.1053 | 0.6865 | 0.865 | 0.7655 | 0.4 |
| No log | 1.76 | 750 | 0.0675 | 0.7040 | 0.785 | 0.7423 | 0.4 |
| No log | 1.76 | 750 | 0.1260 | 0.7845 | 0.91 | 0.8426 | 0.091 |
| No log | 1.76 | 750 | 0.1234 | 0.4711 | 0.57 | 0.5158 | 0.035 |
| No log | 1.76 | 750 | 0.0631 | 0.5297 | 0.58 | 0.5537 | 0.4 |
| No log | 1.76 | 750 | 0.0702 | 0.7901 | 0.96 | 0.8668 | 0.8 |
| No log | 1.76 | 750 | 0.0452 | 0.4925 | 0.66 | 0.5641 | 0.058 |
| No log | 1.76 | 750 | 0.0561 | 0.8009 | 0.865 | 0.8317 | 0.3000 |
| No log | 1.76 | 750 | 0.0616 | 0.6471 | 0.9167 | 0.7586 | 0.3000 |
| No log | 1.76 | 750 | 0.0469 | 0.7305 | 0.935 | 0.8202 | 0.2 |
| No log | 1.76 | 750 | 0.0403 | 0.8520 | 0.835 | 0.8434 | 0.4 |
| No log | 1.76 | 750 | 0.0628 | 0.6581 | 0.77 | 0.7097 | 0.2 |
| No log | 1.76 | 750 | 0.0482 | 0.8 | 0.8 | 0.8000 | 0.4 |
| No log | 1.76 | 750 | 0.0491 | 0.5471 | 0.61 | 0.5768 | 0.5 |
| No log | 1.76 | 750 | 0.0275 | 0.8832 | 0.945 | 0.9130 | 0.2 |
| No log | 1.76 | 750 | 0.0909 | 0.4534 | 0.535 | 0.4908 | 0.5 |
| No log | 1.76 | 750 | 0.0480 | 0.7723 | 0.8693 | 0.8180 | 0.3000 |
| No log | 1.76 | 750 | 0.1040 | 0.6024 | 0.4854 | 0.5376 | 0.3000 |
| No log | 1.76 | 750 | 0.0661 | 0.5290 | 0.73 | 0.6134 | 0.0530 |
| No log | 1.76 | 750 | 0.0369 | 0.8333 | 0.9 | 0.8654 | 0.4 |
| No log | 1.76 | 750 | 0.0623 | 0.8131 | 0.87 | 0.8406 | 0.2 |
| No log | 1.76 | 750 | 0.0432 | 0.6074 | 0.735 | 0.6652 | 0.2 |
| No log | 1.76 | 750 | 0.0624 | 0.8097 | 0.9242 | 0.8632 | 0.3000 |
| No log | 1.76 | 750 | 0.0444 | 0.6584 | 0.8040 | 0.7240 | 0.3000 |
| No log | 1.76 | 750 | 0.0525 | 0.7811 | 0.785 | 0.7830 | 0.5 |
| No log | 1.76 | 750 | 0.0460 | 0.8079 | 0.82 | 0.8139 | 0.3000 |
| No log | 1.76 | 750 | 0.0413 | 0.7742 | 0.84 | 0.8058 | 0.3000 |
| No log | 1.76 | 750 | 0.0480 | 0.6759 | 0.855 | 0.7550 | 0.5 |
| No log | 1.76 | 750 | 0.0482 | 0.7306 | 0.895 | 0.8045 | 0.2 |
| No log | 1.76 | 750 | 0.0406 | 0.8271 | 0.885 | 0.8551 | 0.3000 |
| No log | 1.76 | 750 | 0.0474 | 0.7692 | 0.75 | 0.7595 | 0.5 |
| No log | 1.76 | 750 | 0.0317 | 0.8989 | 0.8 | 0.8466 | 0.5 |
| No log | 1.76 | 750 | 0.0639 | 0.7729 | 0.8 | 0.7862 | 0.3000 |
| No log | 1.76 | 750 | 0.0465 | 0.4549 | 0.655 | 0.5369 | 0.2 |
| No log | 1.76 | 750 | 0.0562 | 0.5804 | 0.65 | 0.6132 | 0.6 |
| No log | 1.76 | 750 | 0.0519 | 0.6873 | 0.89 | 0.7756 | 0.066 |
| No log | 1.76 | 750 | 0.0605 | 0.6062 | 0.785 | 0.6841 | 0.6 |
| No log | 1.76 | 750 | 0.0591 | 0.7692 | 0.8 | 0.7843 | 0.5 |
| No log | 1.76 | 750 | 0.0371 | 0.7723 | 0.865 | 0.8160 | 0.3000 |
| No log | 1.76 | 750 | 0.0988 | 0.5036 | 0.705 | 0.5875 | 0.3000 |
| No log | 1.76 | 750 | 0.0685 | 0.7751 | 0.81 | 0.7922 | 0.4 |
| No log | 1.76 | 750 | 0.0479 | 0.6842 | 0.715 | 0.6993 | 0.5 |
| No log | 1.76 | 750 | 0.1159 | 0.5061 | 0.6313 | 0.5618 | 0.089 |
| No log | 1.76 | 750 | 0.0546 | 0.7113 | 0.8586 | 0.7780 | 0.4 |
| No log | 1.76 | 750 | 0.0185 | 0.9242 | 0.975 | 0.9489 | 0.2 |
| No log | 1.76 | 750 | 0.0019 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 1.76 | 750 | 0.0026 | 1.0 | 0.995 | 0.9975 | 0.3000 |
| No log | 1.76 | 750 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.0870 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.0880 |
| No log | 1.76 | 750 | 0.0003 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 1.76 | 750 | 0.0006 | 0.9950 | 1.0 | 0.9975 | 0.0370 |
| No log | 1.76 | 750 | 0.0024 | 0.9900 | 0.995 | 0.9925 | 0.7000 |
| No log | 1.76 | 750 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 1.76 | 750 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 1.76 | 750 | 0.0184 | 0.9890 | 0.9 | 0.9424 | 0.3000 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.0510 |
| No log | 1.76 | 750 | 0.0316 | 0.9011 | 0.82 | 0.8586 | 0.2 |
| No log | 1.76 | 750 | 0.0017 | 0.9901 | 1.0 | 0.9950 | 0.046 |
| No log | 1.76 | 750 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 1.76 | 750 | 0.0030 | 0.995 | 0.995 | 0.995 | 0.9 |
| No log | 1.76 | 750 | 0.0051 | 0.9703 | 0.98 | 0.9751 | 0.6 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 1.76 | 750 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.064 |
| No log | 1.76 | 750 | 0.0020 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 1.76 | 750 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 1.76 | 750 | 0.0134 | 0.9579 | 0.91 | 0.9333 | 0.2 |
| No log | 1.76 | 750 | 0.1140 | 0.3783 | 0.575 | 0.4563 | 0.6 |
| No log | 1.76 | 750 | 0.0968 | 0.3534 | 0.3241 | 0.3381 | 0.4 |
| No log | 1.76 | 750 | 0.1203 | 0.6667 | 0.69 | 0.6781 | 0.3000 |
| No log | 1.76 | 750 | 0.1112 | 0.5761 | 0.795 | 0.6681 | 0.3000 |
| No log | 2.34 | 1000 | 0.0411 | 0.9730 | 0.9 | 0.9351 | 0.5 |
| No log | 2.34 | 1000 | 0.0119 | 0.8942 | 0.93 | 0.9118 | 0.3000 |
| No log | 2.34 | 1000 | 0.0340 | 0.8872 | 0.865 | 0.8759 | 0.5 |
| No log | 2.34 | 1000 | 0.0162 | 0.8722 | 0.99 | 0.9274 | 0.056 |
| No log | 2.34 | 1000 | 0.0391 | 0.9479 | 0.91 | 0.9286 | 0.4 |
| No log | 2.34 | 1000 | 0.0095 | 0.9802 | 0.99 | 0.9851 | 0.7000 |
| No log | 2.34 | 1000 | 0.0145 | 0.9187 | 0.9648 | 0.9412 | 0.4 |
| No log | 2.34 | 1000 | 0.0110 | 0.9752 | 0.985 | 0.9801 | 0.3000 |
| No log | 2.34 | 1000 | 0.0118 | 0.9431 | 0.995 | 0.9684 | 0.4 |
| No log | 2.34 | 1000 | 0.0313 | 0.9568 | 0.885 | 0.9195 | 0.6 |
| No log | 2.34 | 1000 | 0.0078 | 0.9615 | 1.0 | 0.9804 | 0.3000 |
| No log | 2.34 | 1000 | 0.0135 | 0.9369 | 0.965 | 0.9507 | 0.5 |
| No log | 2.34 | 1000 | 0.0071 | 0.9569 | 1.0 | 0.9780 | 0.2 |
| No log | 2.34 | 1000 | 0.0182 | 0.9522 | 0.995 | 0.9731 | 0.7000 |
| No log | 2.34 | 1000 | 0.0142 | 0.9259 | 1.0 | 0.9615 | 0.0220 |
| No log | 2.34 | 1000 | 0.0110 | 0.9259 | 1.0 | 0.9615 | 0.068 |
| No log | 2.34 | 1000 | 0.0098 | 0.9747 | 0.9797 | 0.9772 | 0.9 |
| No log | 2.34 | 1000 | 0.0112 | 0.9648 | 0.96 | 0.9624 | 0.7000 |
| No log | 2.34 | 1000 | 0.0462 | 0.8472 | 0.9196 | 0.8819 | 0.5 |
| No log | 2.34 | 1000 | 0.0169 | 0.9259 | 1.0 | 0.9615 | 0.0190 |
| No log | 2.34 | 1000 | 0.0121 | 0.9299 | 0.995 | 0.9614 | 0.028 |
| No log | 2.34 | 1000 | 0.0485 | 0.9502 | 0.955 | 0.9526 | 0.066 |
| No log | 2.34 | 1000 | 0.0048 | 1.0 | 0.94 | 0.9691 | 0.7000 |
| No log | 2.34 | 1000 | 0.0061 | 0.9897 | 0.9797 | 0.9847 | 0.2 |
| No log | 2.34 | 1000 | 0.0187 | 0.9474 | 0.99 | 0.9682 | 0.2 |
| No log | 2.34 | 1000 | 0.0500 | 0.9444 | 0.85 | 0.8947 | 0.6 |
| No log | 2.34 | 1000 | 0.0070 | 0.9275 | 0.9648 | 0.9458 | 0.3000 |
| No log | 2.34 | 1000 | 0.0221 | 0.9151 | 0.97 | 0.9417 | 0.4 |
| No log | 2.34 | 1000 | 0.0163 | 0.9479 | 0.91 | 0.9286 | 0.6 |
| No log | 2.34 | 1000 | 0.0152 | 0.9522 | 0.995 | 0.9731 | 0.2 |
| No log | 2.34 | 1000 | 0.0317 | 0.9502 | 0.955 | 0.9526 | 0.5 |
| No log | 2.34 | 1000 | 0.0258 | 0.9469 | 0.98 | 0.9631 | 0.5 |
| No log | 2.34 | 1000 | 0.0158 | 0.9245 | 0.98 | 0.9515 | 0.083 |
| No log | 2.34 | 1000 | 0.0116 | 0.9662 | 1.0 | 0.9828 | 0.2 |
| No log | 2.34 | 1000 | 0.0111 | 0.9563 | 0.985 | 0.9704 | 0.2 |
| No log | 2.34 | 1000 | 0.0768 | 0.9101 | 0.81 | 0.8571 | 0.4 |
| No log | 2.34 | 1000 | 0.0099 | 0.9378 | 0.98 | 0.9584 | 0.2 |
| No log | 2.34 | 1000 | 0.0137 | 0.9851 | 0.99 | 0.9875 | 0.3000 |
| No log | 2.34 | 1000 | 0.1184 | 0.6931 | 0.7035 | 0.6983 | 0.061 |
| No log | 2.34 | 1000 | 0.0410 | 0.9310 | 0.9497 | 0.9403 | 0.0860 |
| No log | 2.34 | 1000 | 0.0510 | 0.8311 | 0.935 | 0.88 | 0.4 |
| No log | 2.34 | 1000 | 0.0120 | 0.9466 | 0.975 | 0.9606 | 0.067 |
| No log | 2.34 | 1000 | 0.0108 | 0.9803 | 0.995 | 0.9876 | 0.5 |
| No log | 2.34 | 1000 | 0.0056 | 0.9896 | 0.9598 | 0.9745 | 0.7000 |
| No log | 2.34 | 1000 | 0.0066 | 0.9604 | 0.97 | 0.9652 | 0.6 |
| No log | 2.34 | 1000 | 0.0039 | 0.97 | 0.97 | 0.97 | 0.8 |
| No log | 2.34 | 1000 | 0.0096 | 0.9569 | 1.0 | 0.9780 | 0.4 |
| No log | 2.34 | 1000 | 0.0322 | 0.8837 | 0.95 | 0.9157 | 0.4 |
| No log | 2.34 | 1000 | 0.0119 | 0.9852 | 1.0 | 0.9926 | 0.3000 |
| No log | 2.34 | 1000 | 0.0249 | 0.9655 | 0.98 | 0.9727 | 0.8 |
| No log | 2.34 | 1000 | 0.0150 | 0.975 | 0.975 | 0.975 | 0.8 |
| No log | 2.34 | 1000 | 0.0043 | 0.9455 | 0.9695 | 0.9574 | 0.7000 |
| No log | 2.34 | 1000 | 0.1138 | 0.6936 | 0.815 | 0.7494 | 0.054 |
| No log | 2.34 | 1000 | 0.0629 | 0.9048 | 0.855 | 0.8792 | 0.5 |
| No log | 2.34 | 1000 | 0.0107 | 0.9660 | 0.995 | 0.9803 | 0.5 |
| No log | 2.34 | 1000 | 0.0162 | 0.9524 | 1.0 | 0.9756 | 0.046 |
| No log | 2.34 | 1000 | 0.1027 | 0.9425 | 0.82 | 0.8770 | 0.4 |
| No log | 2.34 | 1000 | 0.0066 | 0.9756 | 1.0 | 0.9877 | 0.3000 |
| No log | 2.34 | 1000 | 0.1150 | 0.8763 | 0.85 | 0.8629 | 0.4 |
| No log | 2.34 | 1000 | 0.0108 | 0.9479 | 1.0 | 0.9732 | 0.5 |
| No log | 2.34 | 1000 | 0.0094 | 0.9660 | 0.995 | 0.9803 | 0.2 |
| No log | 2.34 | 1000 | 0.0090 | 0.9458 | 0.96 | 0.9529 | 0.9 |
| No log | 2.34 | 1000 | 0.0483 | 0.8733 | 0.965 | 0.9169 | 0.2 |
| No log | 2.34 | 1000 | 0.0070 | 0.9660 | 0.995 | 0.9803 | 0.3000 |
| No log | 2.34 | 1000 | 0.0121 | 0.9519 | 0.99 | 0.9706 | 0.8 |
| No log | 2.34 | 1000 | 0.0088 | 0.9431 | 0.995 | 0.9684 | 0.7000 |
| No log | 2.34 | 1000 | 0.0148 | 0.9567 | 0.995 | 0.9755 | 0.078 |
| No log | 2.34 | 1000 | 0.0081 | 0.9662 | 1.0 | 0.9828 | 0.4 |
| No log | 2.34 | 1000 | 0.0311 | 0.9072 | 0.88 | 0.8934 | 0.3000 |
| No log | 2.34 | 1000 | 0.0560 | 0.8664 | 0.94 | 0.9017 | 0.2 |
| No log | 2.34 | 1000 | 0.0094 | 0.9372 | 0.97 | 0.9533 | 0.2 |
| No log | 2.34 | 1000 | 0.0617 | 0.8615 | 0.84 | 0.8506 | 0.6 |
| No log | 2.34 | 1000 | 0.0104 | 0.9567 | 0.995 | 0.9755 | 0.3000 |
| No log | 2.34 | 1000 | 0.0153 | 0.9515 | 0.98 | 0.9655 | 0.6 |
| No log | 2.34 | 1000 | 0.0151 | 0.8676 | 0.95 | 0.9069 | 0.4 |
| No log | 2.34 | 1000 | 0.0081 | 0.9634 | 0.92 | 0.9412 | 0.4 |
| No log | 2.34 | 1000 | 0.0181 | 0.9519 | 0.99 | 0.9706 | 0.5 |
| No log | 2.34 | 1000 | 0.0139 | 0.9444 | 0.935 | 0.9397 | 0.3000 |
| No log | 2.34 | 1000 | 0.0571 | 0.9476 | 0.905 | 0.9258 | 0.2 |
| No log | 2.34 | 1000 | 0.0238 | 0.9198 | 0.86 | 0.8889 | 0.7000 |
| No log | 2.34 | 1000 | 0.0815 | 0.6917 | 0.8838 | 0.7761 | 0.003 |
| No log | 2.34 | 1000 | 0.0260 | 0.9554 | 0.965 | 0.9602 | 0.2 |
| No log | 2.34 | 1000 | 0.1174 | 0.7981 | 0.83 | 0.8137 | 0.3000 |
| No log | 2.34 | 1000 | 0.0195 | 0.9270 | 0.825 | 0.8730 | 0.7000 |
| No log | 2.34 | 1000 | 0.0673 | 0.7583 | 0.8 | 0.7786 | 0.4 |
| No log | 2.34 | 1000 | 0.0230 | 0.9072 | 0.88 | 0.8934 | 0.4 |
| No log | 2.34 | 1000 | 0.0781 | 0.8477 | 0.835 | 0.8413 | 0.4 |
| No log | 2.34 | 1000 | 0.0909 | 0.7981 | 0.85 | 0.8232 | 0.4 |
| No log | 2.34 | 1000 | 0.0610 | 0.6566 | 0.5477 | 0.5973 | 0.5 |
| No log | 2.34 | 1000 | 0.0520 | 0.8408 | 0.845 | 0.8429 | 0.5 |
| No log | 2.34 | 1000 | 0.0459 | 0.8621 | 0.875 | 0.8685 | 0.5 |
| No log | 2.34 | 1000 | 0.0935 | 0.8081 | 0.8 | 0.8040 | 0.4 |
| No log | 2.34 | 1000 | 0.0434 | 0.8303 | 0.905 | 0.8660 | 0.4 |
| No log | 2.34 | 1000 | 0.0249 | 0.89 | 0.89 | 0.89 | 0.5 |
| No log | 2.34 | 1000 | 0.0683 | 0.7814 | 0.84 | 0.8096 | 0.4 |
| No log | 2.34 | 1000 | 0.0629 | 0.8894 | 0.885 | 0.8872 | 0.6 |
| No log | 2.34 | 1000 | 0.0558 | 0.8841 | 0.915 | 0.8993 | 0.5 |
| No log | 2.34 | 1000 | 0.0471 | 0.8429 | 0.805 | 0.8235 | 0.6 |
| No log | 2.34 | 1000 | 0.0343 | 0.8770 | 0.8241 | 0.8497 | 0.6 |
| No log | 2.34 | 1000 | 0.0623 | 0.7232 | 0.81 | 0.7642 | 0.4 |
| No log | 2.34 | 1000 | 0.0754 | 0.7477 | 0.8 | 0.7729 | 0.5 |
| No log | 2.34 | 1000 | 0.0556 | 0.8311 | 0.91 | 0.8687 | 0.3000 |
| No log | 2.34 | 1000 | 0.0426 | 0.8317 | 0.865 | 0.8480 | 0.3000 |
| No log | 2.34 | 1000 | 0.1347 | 0.8579 | 0.845 | 0.8514 | 0.2 |
| No log | 2.34 | 1000 | 0.0259 | 0.8057 | 0.85 | 0.8273 | 0.4 |
| No log | 2.34 | 1000 | 0.0352 | 0.8969 | 0.8832 | 0.8900 | 0.2 |
| No log | 2.34 | 1000 | 0.0676 | 0.8634 | 0.885 | 0.8741 | 0.5 |
| No log | 2.34 | 1000 | 0.0774 | 0.7477 | 0.8040 | 0.7748 | 0.3000 |
| No log | 2.34 | 1000 | 0.0234 | 0.7478 | 0.86 | 0.8000 | 0.2 |
| No log | 2.34 | 1000 | 0.0841 | 0.7266 | 0.93 | 0.8158 | 0.3000 |
| No log | 2.34 | 1000 | 0.0331 | 0.8177 | 0.83 | 0.8238 | 0.4 |
| No log | 2.34 | 1000 | 0.0620 | 0.835 | 0.835 | 0.835 | 0.5 |
| No log | 2.34 | 1000 | 0.0700 | 0.8830 | 0.83 | 0.8557 | 0.6 |
| No log | 2.34 | 1000 | 0.1109 | 0.7773 | 0.82 | 0.7981 | 0.5 |
| No log | 2.34 | 1000 | 0.0744 | 0.7131 | 0.895 | 0.7938 | 0.097 |
| No log | 2.34 | 1000 | 0.0612 | 0.8137 | 0.83 | 0.8218 | 0.5 |
| No log | 2.34 | 1000 | 0.0507 | 0.8018 | 0.8945 | 0.8456 | 0.4 |
| No log | 2.34 | 1000 | 0.1478 | 0.6885 | 0.84 | 0.7568 | 0.098 |
| No log | 2.34 | 1000 | 0.0761 | 0.5574 | 0.51 | 0.5326 | 0.4 |
| No log | 2.34 | 1000 | 0.0926 | 0.9274 | 0.83 | 0.8760 | 0.5 |
| No log | 2.34 | 1000 | 0.2438 | 0.3158 | 0.7839 | 0.4502 | 0.001 |
| No log | 2.34 | 1000 | 0.0760 | 0.8944 | 0.8090 | 0.8496 | 0.4 |
| No log | 2.34 | 1000 | 0.0865 | 0.6900 | 0.935 | 0.7941 | 0.2 |
| No log | 2.34 | 1000 | 0.0248 | 0.9275 | 0.9040 | 0.9156 | 0.2 |
| No log | 2.34 | 1000 | 0.0621 | 0.8832 | 0.945 | 0.9130 | 0.3000 |
| No log | 2.34 | 1000 | 0.0270 | 0.8585 | 0.8844 | 0.8713 | 0.3000 |
| No log | 2.34 | 1000 | 0.0352 | 0.8743 | 0.765 | 0.8160 | 0.7000 |
| No log | 2.34 | 1000 | 0.0413 | 0.7622 | 0.705 | 0.7325 | 0.5 |
| No log | 2.34 | 1000 | 0.0624 | 0.8408 | 0.845 | 0.8429 | 0.6 |
| No log | 2.34 | 1000 | 0.0608 | 0.6412 | 0.84 | 0.7273 | 0.3000 |
| No log | 2.34 | 1000 | 0.0594 | 0.8981 | 0.97 | 0.9327 | 0.2 |
| No log | 2.34 | 1000 | 0.0529 | 0.8488 | 0.87 | 0.8593 | 0.5 |
| No log | 2.34 | 1000 | 0.1257 | 0.7017 | 0.835 | 0.7626 | 0.4 |
| No log | 2.34 | 1000 | 0.0196 | 0.8820 | 0.7136 | 0.7889 | 0.8 |
| No log | 2.34 | 1000 | 0.1820 | 0.5320 | 0.7940 | 0.6371 | 0.017 |
| No log | 2.34 | 1000 | 0.0939 | 0.8763 | 0.85 | 0.8629 | 0.5 |
| No log | 2.34 | 1000 | 0.0514 | 0.8684 | 0.825 | 0.8462 | 0.7000 |
| No log | 2.34 | 1000 | 0.0613 | 0.8738 | 0.9 | 0.8867 | 0.5 |
| No log | 2.34 | 1000 | 0.0986 | 0.8729 | 0.79 | 0.8294 | 0.3000 |
| No log | 2.34 | 1000 | 0.0565 | 0.7939 | 0.905 | 0.8458 | 0.3000 |
| No log | 2.34 | 1000 | 0.1316 | 0.7121 | 0.94 | 0.8103 | 0.083 |
| No log | 2.34 | 1000 | 0.0383 | 0.7991 | 0.935 | 0.8618 | 0.4 |
| No log | 2.34 | 1000 | 0.0592 | 0.8763 | 0.85 | 0.8629 | 0.5 |
| No log | 2.34 | 1000 | 0.0166 | 0.8440 | 0.92 | 0.8804 | 0.4 |
| No log | 2.34 | 1000 | 0.0976 | 0.7258 | 0.9 | 0.8036 | 0.3000 |
| No log | 2.34 | 1000 | 0.0367 | 0.8808 | 0.85 | 0.8651 | 0.7000 |
| No log | 2.34 | 1000 | 0.0562 | 0.8522 | 0.865 | 0.8586 | 0.5 |
| No log | 2.34 | 1000 | 0.0367 | 0.8967 | 0.825 | 0.8594 | 0.6 |
| No log | 2.34 | 1000 | 0.0897 | 0.7962 | 0.84 | 0.8175 | 0.4 |
| No log | 2.34 | 1000 | 0.0377 | 0.8995 | 0.895 | 0.8972 | 0.6 |
| No log | 2.34 | 1000 | 0.0714 | 0.7193 | 0.615 | 0.6631 | 0.3000 |
| No log | 2.34 | 1000 | 0.1205 | 0.7513 | 0.74 | 0.7456 | 0.4 |
| No log | 2.34 | 1000 | 0.0276 | 0.8643 | 0.86 | 0.8622 | 0.3000 |
| No log | 2.34 | 1000 | 0.0694 | 0.8359 | 0.815 | 0.8253 | 0.6 |
| No log | 2.34 | 1000 | 0.0895 | 0.8075 | 0.86 | 0.8329 | 0.4 |
| No log | 2.34 | 1000 | 0.0513 | 0.8325 | 0.87 | 0.8509 | 0.6 |
| No log | 2.34 | 1000 | 0.0238 | 0.8287 | 0.895 | 0.8606 | 0.4 |
| No log | 2.34 | 1000 | 0.0171 | 0.8261 | 0.95 | 0.8837 | 0.089 |
| No log | 2.34 | 1000 | 0.1072 | 0.7020 | 0.895 | 0.7868 | 0.3000 |
| No log | 2.34 | 1000 | 0.0608 | 0.612 | 0.765 | 0.6800 | 0.2 |
| No log | 2.34 | 1000 | 0.0859 | 0.8384 | 0.96 | 0.8951 | 0.031 |
| No log | 2.34 | 1000 | 0.1109 | 0.4394 | 0.78 | 0.5622 | 0.0360 |
| No log | 2.34 | 1000 | 0.0562 | 0.7668 | 0.8636 | 0.8124 | 0.082 |
| No log | 2.34 | 1000 | 0.0633 | 0.8634 | 0.885 | 0.8741 | 0.2 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.002 |
| No log | 2.34 | 1000 | 0.0096 | 0.8111 | 0.8934 | 0.8502 | 0.7000 |
| No log | 2.34 | 1000 | 0.0042 | 0.9206 | 0.985 | 0.9517 | 0.4 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.008 |
| No log | 2.34 | 1000 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.035 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.032 |
| No log | 2.34 | 1000 | 0.0023 | 0.9947 | 1.0 | 0.9973 | 0.021 |
| No log | 2.34 | 1000 | 0.0012 | 1.0 | 0.995 | 0.9975 | 0.4 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.34 | 1000 | 0.0033 | 0.9900 | 0.995 | 0.9925 | 0.9 |
| No log | 2.34 | 1000 | 0.0007 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 2.34 | 1000 | 0.0085 | 0.9897 | 0.965 | 0.9772 | 0.0730 |
| No log | 2.34 | 1000 | 0.0149 | 0.9946 | 0.925 | 0.9585 | 0.2 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0156 | 0.9643 | 0.945 | 0.9545 | 0.2 |
| No log | 2.34 | 1000 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.003 |
| No log | 2.34 | 1000 | 0.0030 | 0.9949 | 0.985 | 0.9899 | 0.3000 |
| No log | 2.34 | 1000 | 0.0010 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 2.34 | 1000 | 0.0031 | 0.98 | 0.98 | 0.98 | 0.8 |
| No log | 2.34 | 1000 | 0.0353 | 0.9274 | 0.83 | 0.8760 | 0.7000 |
| No log | 2.34 | 1000 | 0.0006 | 0.9950 | 1.0 | 0.9975 | 0.031 |
| No log | 2.34 | 1000 | 0.0012 | 0.9852 | 1.0 | 0.9926 | 0.032 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.011 |
| No log | 2.34 | 1000 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.7000 |
| No log | 2.34 | 1000 | 0.0012 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.089 |
| No log | 2.34 | 1000 | 0.0052 | 0.9289 | 0.98 | 0.9538 | 0.4 |
| No log | 2.34 | 1000 | 0.0019 | 0.9901 | 1.0 | 0.9950 | 0.045 |
| No log | 2.34 | 1000 | 0.0176 | 0.9845 | 0.955 | 0.9695 | 0.038 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.002 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 2.34 | 1000 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0015 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 2.34 | 1000 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 2.34 | 1000 | 0.0033 | 0.9792 | 1.0 | 0.9895 | 0.02 |
| No log | 2.34 | 1000 | 0.0040 | 0.9463 | 0.97 | 0.9580 | 0.2 |
| No log | 2.34 | 1000 | 0.0041 | 0.9804 | 1.0 | 0.9901 | 0.5 |
| No log | 2.34 | 1000 | 0.0011 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.34 | 1000 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.34 | 1000 | 0.0015 | 0.9900 | 0.995 | 0.9925 | 0.0860 |
| No log | 2.34 | 1000 | 0.0173 | 0.9108 | 0.9798 | 0.9440 | 0.4 |
| No log | 2.34 | 1000 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 2.34 | 1000 | 0.0022 | 0.99 | 0.99 | 0.99 | 0.7000 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0104 | 0.9524 | 1.0 | 0.9756 | 0.058 |
| No log | 2.34 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 2.34 | 1000 | 0.0162 | 0.9381 | 0.91 | 0.9239 | 0.4 |
| No log | 2.34 | 1000 | 0.0026 | 0.9950 | 1.0 | 0.9975 | 0.003 |
| No log | 2.34 | 1000 | 0.0010 | 0.995 | 0.995 | 0.995 | 0.8 |
| No log | 2.34 | 1000 | 0.0010 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 2.34 | 1000 | 0.0008 | 0.9950 | 1.0 | 0.9975 | 0.4 |
| No log | 2.34 | 1000 | 0.0112 | 0.7934 | 0.96 | 0.8688 | 0.049 |
| No log | 2.34 | 1000 | 0.0083 | 0.9950 | 0.99 | 0.9925 | 0.002 |
| No log | 2.34 | 1000 | 0.0076 | 0.98 | 0.98 | 0.98 | 0.2 |
| No log | 2.34 | 1000 | 0.0169 | 0.7119 | 0.8528 | 0.7760 | 0.6 |
| No log | 2.34 | 1000 | 0.0068 | 0.9139 | 0.955 | 0.9340 | 0.3000 |
| No log | 2.34 | 1000 | 0.0378 | 0.8990 | 0.89 | 0.8945 | 0.5 |
| No log | 2.34 | 1000 | 0.1370 | 0.6301 | 0.8214 | 0.7132 | 0.05 |
| No log | 2.34 | 1000 | 0.0056 | 0.9594 | 0.945 | 0.9521 | 0.7000 |
| No log | 2.34 | 1000 | 0.0923 | 0.8030 | 0.8670 | 0.8338 | 0.3000 |
| No log | 2.34 | 1000 | 0.0283 | 0.8565 | 0.955 | 0.9031 | 0.2 |
| No log | 2.34 | 1000 | 0.0281 | 0.8838 | 0.875 | 0.8794 | 0.4 |
| No log | 2.34 | 1000 | 0.0279 | 0.8786 | 0.905 | 0.8916 | 0.3000 |
| No log | 2.34 | 1000 | 0.0259 | 0.8761 | 0.955 | 0.9139 | 0.4 |
| No log | 2.34 | 1000 | 0.0238 | 0.9355 | 0.87 | 0.9016 | 0.5 |
| No log | 2.34 | 1000 | 0.0329 | 0.8317 | 0.865 | 0.8480 | 0.3000 |
| No log | 2.34 | 1000 | 0.0187 | 0.8233 | 0.885 | 0.8530 | 0.4 |
| No log | 2.34 | 1000 | 0.0405 | 0.8483 | 0.755 | 0.7989 | 0.6 |
| No log | 2.34 | 1000 | 0.0496 | 0.8495 | 0.875 | 0.8621 | 0.3000 |
| No log | 2.34 | 1000 | 0.0021 | 0.9950 | 1.0 | 0.9975 | 0.081 |
| No log | 2.34 | 1000 | 0.0317 | 0.8852 | 0.81 | 0.8460 | 0.7000 |
| No log | 2.34 | 1000 | 0.0276 | 0.7973 | 0.885 | 0.8389 | 0.2 |
| No log | 2.34 | 1000 | 0.0277 | 0.8674 | 0.7929 | 0.8285 | 0.5 |
| No log | 2.34 | 1000 | 0.0630 | 0.6751 | 0.8040 | 0.7339 | 0.3000 |
| No log | 2.34 | 1000 | 0.0246 | 0.8213 | 0.85 | 0.8354 | 0.3000 |
| No log | 2.34 | 1000 | 0.0176 | 0.9086 | 0.895 | 0.9018 | 0.4 |
| No log | 2.34 | 1000 | 0.0065 | 0.9592 | 0.94 | 0.9495 | 0.5 |
| No log | 2.34 | 1000 | 0.0311 | 0.8342 | 0.805 | 0.8193 | 0.6 |
| No log | 2.34 | 1000 | 0.0336 | 0.7902 | 0.885 | 0.8349 | 0.2 |
| No log | 2.34 | 1000 | 0.0280 | 0.8861 | 0.895 | 0.8905 | 0.5 |
| No log | 2.34 | 1000 | 0.0248 | 0.7339 | 0.8 | 0.7656 | 0.4 |
| No log | 2.34 | 1000 | 0.0281 | 0.8488 | 0.87 | 0.8593 | 0.4 |
| No log | 2.34 | 1000 | 0.0300 | 0.9722 | 0.875 | 0.9211 | 0.4 |
| No log | 2.34 | 1000 | 0.0658 | 0.8 | 0.68 | 0.7351 | 0.5 |
| No log | 2.34 | 1000 | 0.0119 | 0.985 | 0.985 | 0.985 | 0.2 |
| No log | 2.34 | 1000 | 0.0011 | 0.9901 | 1.0 | 0.9950 | 0.0260 |
| No log | 2.34 | 1000 | 0.0141 | 0.965 | 0.965 | 0.965 | 0.4 |
| No log | 2.34 | 1000 | 0.0430 | 0.7990 | 0.795 | 0.7970 | 0.3000 |
| No log | 2.34 | 1000 | 0.0457 | 0.7432 | 0.825 | 0.7820 | 0.4 |
| No log | 2.34 | 1000 | 0.1199 | 0.6154 | 0.6809 | 0.6465 | 0.4 |
| No log | 2.34 | 1000 | 0.0323 | 0.7965 | 0.685 | 0.7366 | 0.4 |
| No log | 2.34 | 1000 | 0.0131 | 0.9397 | 0.935 | 0.9373 | 0.5 |
| No log | 2.34 | 1000 | 0.0237 | 0.9118 | 0.93 | 0.9208 | 0.4 |
| No log | 2.34 | 1000 | 0.0046 | 0.9851 | 0.99 | 0.9875 | 0.5 |
| No log | 2.34 | 1000 | 0.0277 | 0.8626 | 0.91 | 0.8856 | 0.2 |
| No log | 2.34 | 1000 | 0.0314 | 0.7188 | 0.69 | 0.7041 | 0.6 |
| No log | 2.34 | 1000 | 0.0664 | 0.7028 | 0.7641 | 0.7322 | 0.4 |
| No log | 2.34 | 1000 | 0.0202 | 0.9531 | 0.915 | 0.9337 | 0.5 |
| No log | 2.34 | 1000 | 0.0202 | 0.8517 | 0.89 | 0.8704 | 0.5 |
| No log | 2.34 | 1000 | 0.0034 | 0.9836 | 1.0 | 0.9917 | 0.3000 |
| No log | 2.34 | 1000 | 0.0117 | 0.9220 | 0.945 | 0.9333 | 0.4 |
| No log | 2.34 | 1000 | 0.0292 | 0.9223 | 0.89 | 0.9059 | 0.5 |
| No log | 2.34 | 1000 | 0.0056 | 0.95 | 0.95 | 0.9500 | 0.4 |
| No log | 2.34 | 1000 | 0.0170 | 0.9231 | 0.9 | 0.9114 | 0.4 |
| No log | 2.34 | 1000 | 0.0691 | 0.8534 | 0.815 | 0.8338 | 0.3000 |
| No log | 2.34 | 1000 | 0.0363 | 0.7358 | 0.7959 | 0.7647 | 0.4 |
| No log | 2.34 | 1000 | 0.0034 | 0.9852 | 1.0 | 0.9926 | 0.3000 |
| No log | 2.34 | 1000 | 0.0774 | 0.7477 | 0.8 | 0.7729 | 0.4 |
| No log | 2.34 | 1000 | 0.0358 | 0.6071 | 0.425 | 0.5 | 0.3000 |
| No log | 2.34 | 1000 | 0.0281 | 0.9239 | 0.85 | 0.8854 | 0.2 |
| No log | 2.34 | 1000 | 0.1175 | 0.7048 | 0.74 | 0.7220 | 0.5 |
| No log | 2.34 | 1000 | 0.0831 | 0.5694 | 0.82 | 0.6721 | 0.2 |
| No log | 2.34 | 1000 | 0.0631 | 0.6346 | 0.825 | 0.7174 | 0.092 |
| No log | 2.34 | 1000 | 0.1007 | 0.7137 | 0.91 | 0.8 | 0.054 |
| No log | 2.34 | 1000 | 0.0432 | 0.7309 | 0.91 | 0.8107 | 0.3000 |
| No log | 2.34 | 1000 | 0.0389 | 0.7900 | 0.865 | 0.8258 | 0.4 |
| No log | 2.34 | 1000 | 0.0453 | 0.5302 | 0.5729 | 0.5507 | 0.3000 |
| No log | 2.34 | 1000 | 0.0521 | 0.7969 | 0.765 | 0.7806 | 0.4 |
| No log | 2.34 | 1000 | 0.0532 | 0.6667 | 0.8364 | 0.7419 | 0.4 |
| No log | 2.34 | 1000 | 0.0410 | 0.8373 | 0.875 | 0.8557 | 0.3000 |
| No log | 2.34 | 1000 | 0.0410 | 0.8373 | 0.875 | 0.8557 | 0.3000 |
| No log | 2.34 | 1000 | 0.0397 | 0.7944 | 0.85 | 0.8213 | 0.3000 |
| No log | 2.34 | 1000 | 0.0509 | 0.7939 | 0.905 | 0.8458 | 0.2 |
| No log | 2.34 | 1000 | 0.0346 | 0.85 | 0.765 | 0.8053 | 0.4 |
| No log | 2.34 | 1000 | 0.0393 | 0.8241 | 0.82 | 0.8221 | 0.5 |
| No log | 2.34 | 1000 | 0.0865 | 0.6851 | 0.805 | 0.7402 | 0.2 |
| No log | 2.34 | 1000 | 0.0472 | 0.7453 | 0.79 | 0.7670 | 0.4 |
| No log | 2.34 | 1000 | 0.0429 | 0.8087 | 0.93 | 0.8651 | 0.2 |
| No log | 2.34 | 1000 | 0.1605 | 0.6634 | 0.68 | 0.6716 | 0.2 |
| No log | 2.34 | 1000 | 0.0520 | 0.7536 | 0.795 | 0.7737 | 0.4 |
| No log | 2.34 | 1000 | 0.1162 | 0.6987 | 0.8392 | 0.7626 | 0.0720 |
| No log | 2.34 | 1000 | 0.0347 | 0.8318 | 0.89 | 0.8599 | 0.3000 |
| No log | 2.34 | 1000 | 0.0347 | 0.8318 | 0.89 | 0.8599 | 0.3000 |
| No log | 2.34 | 1000 | 0.0278 | 0.7857 | 0.7174 | 0.75 | 0.5 |
| No log | 2.34 | 1000 | 0.0278 | 0.7857 | 0.7174 | 0.75 | 0.5 |
| No log | 2.34 | 1000 | 0.0384 | 0.8474 | 0.805 | 0.8256 | 0.4 |
| No log | 2.34 | 1000 | 0.0435 | 0.5181 | 0.5 | 0.5089 | 0.4 |
| No log | 2.34 | 1000 | 0.0522 | 0.5238 | 0.8462 | 0.6471 | 0.091 |
| No log | 2.34 | 1000 | 0.0333 | 0.8232 | 0.745 | 0.7822 | 0.6 |
| No log | 2.34 | 1000 | 0.0367 | 0.6017 | 0.71 | 0.6514 | 0.4 |
| No log | 2.34 | 1000 | 0.0530 | 0.6946 | 0.83 | 0.7563 | 0.2 |
| No log | 2.34 | 1000 | 0.0396 | 0.8343 | 0.755 | 0.7927 | 0.7000 |
| No log | 2.34 | 1000 | 0.0600 | 0.7348 | 0.845 | 0.7860 | 0.3000 |
| No log | 2.34 | 1000 | 0.1126 | 0.5 | 0.3173 | 0.3882 | 0.3000 |
| No log | 2.34 | 1000 | 0.0775 | 0.7523 | 0.82 | 0.7847 | 0.3000 |
| No log | 2.34 | 1000 | 0.0523 | 0.7703 | 0.855 | 0.8104 | 0.5 |
| No log | 2.34 | 1000 | 0.0972 | 0.7876 | 0.76 | 0.7735 | 0.8 |
| No log | 2.34 | 1000 | 0.1203 | 0.6971 | 0.84 | 0.7619 | 0.6 |
| No log | 2.34 | 1000 | 0.0707 | 0.6381 | 0.855 | 0.7308 | 0.2 |
| No log | 2.34 | 1000 | 0.1316 | 0.8054 | 0.89 | 0.8456 | 0.099 |
| No log | 2.34 | 1000 | 0.1522 | 0.4435 | 0.55 | 0.4911 | 0.015 |
| No log | 2.34 | 1000 | 0.0669 | 0.5134 | 0.575 | 0.5425 | 0.3000 |
| No log | 2.34 | 1000 | 0.0756 | 0.7846 | 0.965 | 0.8655 | 0.6 |
| No log | 2.34 | 1000 | 0.0534 | 0.4922 | 0.63 | 0.5526 | 0.0300 |
| No log | 2.34 | 1000 | 0.0616 | 0.7788 | 0.88 | 0.8263 | 0.2 |
| No log | 2.34 | 1000 | 0.0580 | 0.8889 | 0.6667 | 0.7619 | 0.5 |
| No log | 2.34 | 1000 | 0.0486 | 0.7287 | 0.9 | 0.8054 | 0.2 |
| No log | 2.34 | 1000 | 0.0402 | 0.8447 | 0.87 | 0.8571 | 0.4 |
| No log | 2.34 | 1000 | 0.0664 | 0.6916 | 0.74 | 0.7150 | 0.2 |
| No log | 2.34 | 1000 | 0.0490 | 0.7840 | 0.835 | 0.8087 | 0.4 |
| No log | 2.34 | 1000 | 0.0485 | 0.5076 | 0.665 | 0.5758 | 0.3000 |
| No log | 2.34 | 1000 | 0.0289 | 0.8739 | 0.97 | 0.9194 | 0.0880 |
| No log | 2.34 | 1000 | 0.0954 | 0.4286 | 0.6 | 0.5 | 0.4 |
| No log | 2.34 | 1000 | 0.0526 | 0.8020 | 0.8141 | 0.8080 | 0.5 |
| No log | 2.34 | 1000 | 0.1072 | 0.5976 | 0.4757 | 0.5297 | 0.3000 |
| No log | 2.34 | 1000 | 0.0871 | 0.5181 | 0.645 | 0.5746 | 0.035 |
| No log | 2.34 | 1000 | 0.0400 | 0.8524 | 0.895 | 0.8732 | 0.5 |
| No log | 2.34 | 1000 | 0.0618 | 0.7724 | 0.95 | 0.8520 | 0.07 |
| No log | 2.34 | 1000 | 0.0618 | 0.7724 | 0.95 | 0.8520 | 0.07 |
| No log | 2.34 | 1000 | 0.0461 | 0.6435 | 0.695 | 0.6683 | 0.2 |
| No log | 2.34 | 1000 | 0.0622 | 0.8203 | 0.8990 | 0.8578 | 0.4 |
| No log | 2.34 | 1000 | 0.0463 | 0.6721 | 0.8241 | 0.7404 | 0.3000 |
| No log | 2.34 | 1000 | 0.0532 | 0.7038 | 0.915 | 0.7957 | 0.2 |
| No log | 2.34 | 1000 | 0.0472 | 0.7870 | 0.85 | 0.8173 | 0.2 |
| No log | 2.34 | 1000 | 0.0422 | 0.7636 | 0.84 | 0.8000 | 0.3000 |
| No log | 2.34 | 1000 | 0.0516 | 0.7064 | 0.83 | 0.7632 | 0.6 |
| No log | 2.34 | 1000 | 0.0513 | 0.7661 | 0.835 | 0.7990 | 0.3000 |
| No log | 2.34 | 1000 | 0.0401 | 0.8636 | 0.855 | 0.8593 | 0.4 |
| No log | 2.34 | 1000 | 0.0501 | 0.7536 | 0.78 | 0.7666 | 0.5 |
| No log | 2.34 | 1000 | 0.0321 | 0.8846 | 0.805 | 0.8429 | 0.5 |
| No log | 2.34 | 1000 | 0.0655 | 0.7277 | 0.855 | 0.7862 | 0.2 |
| No log | 2.34 | 1000 | 0.0532 | 0.4387 | 0.68 | 0.5333 | 0.074 |
| No log | 2.34 | 1000 | 0.0596 | 0.5510 | 0.675 | 0.6067 | 0.5 |
| No log | 2.34 | 1000 | 0.0501 | 0.7319 | 0.86 | 0.7908 | 0.0880 |
| No log | 2.34 | 1000 | 0.0648 | 0.6622 | 0.745 | 0.7012 | 0.7000 |
| No log | 2.34 | 1000 | 0.0582 | 0.7658 | 0.85 | 0.8057 | 0.4 |
| No log | 2.34 | 1000 | 0.0396 | 0.7980 | 0.81 | 0.8040 | 0.4 |
| No log | 2.34 | 1000 | 0.1084 | 0.5018 | 0.705 | 0.5863 | 0.3000 |
| No log | 2.34 | 1000 | 0.0701 | 0.7895 | 0.825 | 0.8068 | 0.4 |
| No log | 2.34 | 1000 | 0.0474 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.34 | 1000 | 0.0474 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.34 | 1000 | 0.0474 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.34 | 1000 | 0.0474 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.34 | 1000 | 0.1290 | 0.5256 | 0.5707 | 0.5472 | 0.1 |
| No log | 2.34 | 1000 | 0.0608 | 0.7523 | 0.8131 | 0.7816 | 0.6 |
| No log | 2.34 | 1000 | 0.0189 | 0.9282 | 0.97 | 0.9487 | 0.2 |
| No log | 2.34 | 1000 | 0.0021 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 2.34 | 1000 | 0.0027 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.097 |
| No log | 2.34 | 1000 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.056 |
| No log | 2.34 | 1000 | 0.0026 | 0.9803 | 0.995 | 0.9876 | 0.6 |
| No log | 2.34 | 1000 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.34 | 1000 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.34 | 1000 | 0.0198 | 0.9890 | 0.9 | 0.9424 | 0.4 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.056 |
| No log | 2.34 | 1000 | 0.0371 | 0.9195 | 0.8 | 0.8556 | 0.2 |
| No log | 2.34 | 1000 | 0.0018 | 0.9901 | 1.0 | 0.9950 | 0.024 |
| No log | 2.34 | 1000 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 2.34 | 1000 | 0.0032 | 0.9851 | 0.995 | 0.9900 | 0.9 |
| No log | 2.34 | 1000 | 0.0051 | 0.9701 | 0.975 | 0.9726 | 0.6 |
| No log | 2.34 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.021 |
| No log | 2.34 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.058 |
| No log | 2.34 | 1000 | 0.0021 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 2.34 | 1000 | 0.0009 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 2.34 | 1000 | 0.0170 | 0.9340 | 0.92 | 0.9270 | 0.081 |
| No log | 2.34 | 1000 | 0.1219 | 0.4164 | 0.56 | 0.4776 | 0.7000 |
| No log | 2.34 | 1000 | 0.0952 | 0.3952 | 0.3379 | 0.3643 | 0.4 |
| No log | 2.34 | 1000 | 0.1251 | 0.675 | 0.675 | 0.675 | 0.3000 |
| No log | 2.34 | 1000 | 0.1113 | 0.6292 | 0.755 | 0.6864 | 0.4 |
| No log | 2.93 | 1250 | 0.0419 | 0.9630 | 0.91 | 0.9357 | 0.5 |
| No log | 2.93 | 1250 | 0.0121 | 0.9289 | 0.915 | 0.9219 | 0.5 |
| No log | 2.93 | 1250 | 0.0339 | 0.8660 | 0.905 | 0.8851 | 0.4 |
| No log | 2.93 | 1250 | 0.0163 | 0.8945 | 0.975 | 0.9330 | 0.093 |
| No log | 2.93 | 1250 | 0.0396 | 0.9340 | 0.92 | 0.9270 | 0.2 |
| No log | 2.93 | 1250 | 0.0100 | 0.9802 | 0.99 | 0.9851 | 0.7000 |
| No log | 2.93 | 1250 | 0.0143 | 0.9320 | 0.9648 | 0.9481 | 0.5 |
| No log | 2.93 | 1250 | 0.0105 | 0.9706 | 0.99 | 0.9802 | 0.4 |
| No log | 2.93 | 1250 | 0.0121 | 0.9434 | 1.0 | 0.9709 | 0.5 |
| No log | 2.93 | 1250 | 0.0309 | 0.9672 | 0.885 | 0.9243 | 0.7000 |
| No log | 2.93 | 1250 | 0.0077 | 0.9660 | 0.995 | 0.9803 | 0.4 |
| No log | 2.93 | 1250 | 0.0124 | 0.9463 | 0.97 | 0.9580 | 0.5 |
| No log | 2.93 | 1250 | 0.0073 | 0.9569 | 1.0 | 0.9780 | 0.2 |
| No log | 2.93 | 1250 | 0.0173 | 0.9522 | 0.995 | 0.9731 | 0.6 |
| No log | 2.93 | 1250 | 0.0136 | 0.9343 | 0.995 | 0.9637 | 0.2 |
| No log | 2.93 | 1250 | 0.0105 | 0.9259 | 1.0 | 0.9615 | 0.068 |
| No log | 2.93 | 1250 | 0.0096 | 0.9608 | 0.9949 | 0.9776 | 0.7000 |
| No log | 2.93 | 1250 | 0.0111 | 0.965 | 0.965 | 0.965 | 0.7000 |
| No log | 2.93 | 1250 | 0.0467 | 0.8732 | 0.8995 | 0.8861 | 0.6 |
| No log | 2.93 | 1250 | 0.0166 | 0.9259 | 1.0 | 0.9615 | 0.0300 |
| No log | 2.93 | 1250 | 0.0117 | 0.9343 | 0.995 | 0.9637 | 0.0370 |
| No log | 2.93 | 1250 | 0.0485 | 0.9458 | 0.96 | 0.9529 | 0.065 |
| No log | 2.93 | 1250 | 0.0044 | 0.9947 | 0.945 | 0.9692 | 0.6 |
| No log | 2.93 | 1250 | 0.0055 | 0.9949 | 0.9848 | 0.9898 | 0.2 |
| No log | 2.93 | 1250 | 0.0187 | 0.9474 | 0.99 | 0.9682 | 0.2 |
| No log | 2.93 | 1250 | 0.0500 | 0.9251 | 0.865 | 0.8941 | 0.5 |
| No log | 2.93 | 1250 | 0.0064 | 0.9275 | 0.9648 | 0.9458 | 0.3000 |
| No log | 2.93 | 1250 | 0.0216 | 0.9116 | 0.98 | 0.9446 | 0.3000 |
| No log | 2.93 | 1250 | 0.0163 | 0.9187 | 0.96 | 0.9389 | 0.4 |
| No log | 2.93 | 1250 | 0.0152 | 0.9476 | 0.995 | 0.9707 | 0.039 |
| No log | 2.93 | 1250 | 0.0307 | 0.9461 | 0.965 | 0.9554 | 0.2 |
| No log | 2.93 | 1250 | 0.0253 | 0.9557 | 0.97 | 0.9628 | 0.7000 |
| No log | 2.93 | 1250 | 0.0146 | 0.9336 | 0.985 | 0.9586 | 0.079 |
| No log | 2.93 | 1250 | 0.0120 | 0.9662 | 1.0 | 0.9828 | 0.2 |
| No log | 2.93 | 1250 | 0.0108 | 0.9519 | 0.99 | 0.9706 | 0.081 |
| No log | 2.93 | 1250 | 0.0805 | 0.8691 | 0.83 | 0.8491 | 0.4 |
| No log | 2.93 | 1250 | 0.0097 | 0.9378 | 0.98 | 0.9584 | 0.2 |
| No log | 2.93 | 1250 | 0.0146 | 0.9899 | 0.985 | 0.9875 | 0.3000 |
| No log | 2.93 | 1250 | 0.1217 | 0.7041 | 0.6935 | 0.6987 | 0.067 |
| No log | 2.93 | 1250 | 0.0422 | 0.9492 | 0.9397 | 0.9444 | 0.092 |
| No log | 2.93 | 1250 | 0.0494 | 0.8282 | 0.94 | 0.8806 | 0.3000 |
| No log | 2.93 | 1250 | 0.0115 | 0.9692 | 0.945 | 0.9570 | 0.2 |
| No log | 2.93 | 1250 | 0.0111 | 0.9756 | 1.0 | 0.9877 | 0.2 |
| No log | 2.93 | 1250 | 0.0058 | 0.9896 | 0.9598 | 0.9745 | 0.8 |
| No log | 2.93 | 1250 | 0.0065 | 0.9561 | 0.98 | 0.9679 | 0.6 |
| No log | 2.93 | 1250 | 0.0038 | 0.97 | 0.97 | 0.97 | 0.8 |
| No log | 2.93 | 1250 | 0.0094 | 0.9569 | 1.0 | 0.9780 | 0.4 |
| No log | 2.93 | 1250 | 0.0317 | 0.8957 | 0.945 | 0.9197 | 0.5 |
| No log | 2.93 | 1250 | 0.0123 | 0.9804 | 1.0 | 0.9901 | 0.2 |
| No log | 2.93 | 1250 | 0.0247 | 0.9703 | 0.98 | 0.9751 | 0.7000 |
| No log | 2.93 | 1250 | 0.0155 | 0.9799 | 0.975 | 0.9774 | 0.8 |
| No log | 2.93 | 1250 | 0.0046 | 0.9242 | 0.9898 | 0.9559 | 0.5 |
| No log | 2.93 | 1250 | 0.1172 | 0.7368 | 0.77 | 0.7531 | 0.099 |
| No log | 2.93 | 1250 | 0.0624 | 0.88 | 0.88 | 0.88 | 0.3000 |
| No log | 2.93 | 1250 | 0.0098 | 0.9660 | 0.995 | 0.9803 | 0.4 |
| No log | 2.93 | 1250 | 0.0158 | 0.9569 | 1.0 | 0.9780 | 0.089 |
| No log | 2.93 | 1250 | 0.1054 | 0.9379 | 0.83 | 0.8806 | 0.4 |
| No log | 2.93 | 1250 | 0.0062 | 0.9803 | 0.995 | 0.9876 | 0.7000 |
| No log | 2.93 | 1250 | 0.1195 | 0.8404 | 0.895 | 0.8668 | 0.2 |
| No log | 2.93 | 1250 | 0.0110 | 0.9434 | 1.0 | 0.9709 | 0.5 |
| No log | 2.93 | 1250 | 0.0096 | 0.9662 | 1.0 | 0.9828 | 0.2 |
| No log | 2.93 | 1250 | 0.0089 | 0.9289 | 0.98 | 0.9538 | 0.7000 |
| No log | 2.93 | 1250 | 0.0469 | 0.8981 | 0.925 | 0.9113 | 0.5 |
| No log | 2.93 | 1250 | 0.0068 | 0.9660 | 0.995 | 0.9803 | 0.3000 |
| No log | 2.93 | 1250 | 0.0121 | 0.975 | 0.975 | 0.975 | 0.9 |
| No log | 2.93 | 1250 | 0.0088 | 0.9515 | 0.98 | 0.9655 | 0.8 |
| No log | 2.93 | 1250 | 0.0147 | 0.9567 | 0.995 | 0.9755 | 0.1 |
| No log | 2.93 | 1250 | 0.0082 | 0.9615 | 1.0 | 0.9804 | 0.3000 |
| No log | 2.93 | 1250 | 0.0314 | 0.9072 | 0.88 | 0.8934 | 0.3000 |
| No log | 2.93 | 1250 | 0.0562 | 0.8507 | 0.94 | 0.8931 | 0.2 |
| No log | 2.93 | 1250 | 0.0090 | 0.9369 | 0.965 | 0.9507 | 0.2 |
| No log | 2.93 | 1250 | 0.0617 | 0.8515 | 0.86 | 0.8557 | 0.6 |
| No log | 2.93 | 1250 | 0.0106 | 0.9612 | 0.99 | 0.9754 | 0.4 |
| No log | 2.93 | 1250 | 0.0152 | 0.9471 | 0.985 | 0.9657 | 0.6 |
| No log | 2.93 | 1250 | 0.0152 | 0.8489 | 0.955 | 0.8988 | 0.4 |
| No log | 2.93 | 1250 | 0.0076 | 0.9592 | 0.94 | 0.9495 | 0.4 |
| No log | 2.93 | 1250 | 0.0182 | 0.9519 | 0.99 | 0.9706 | 0.4 |
| No log | 2.93 | 1250 | 0.0138 | 0.9538 | 0.93 | 0.9418 | 0.4 |
| No log | 2.93 | 1250 | 0.0619 | 0.9436 | 0.92 | 0.9316 | 0.084 |
| No log | 2.93 | 1250 | 0.0237 | 0.8638 | 0.92 | 0.8910 | 0.4 |
| No log | 2.93 | 1250 | 0.0904 | 0.6464 | 0.8586 | 0.7375 | 0.002 |
| No log | 2.93 | 1250 | 0.0250 | 0.9559 | 0.975 | 0.9653 | 0.2 |
| No log | 2.93 | 1250 | 0.1178 | 0.8077 | 0.84 | 0.8235 | 0.3000 |
| No log | 2.93 | 1250 | 0.0186 | 0.9198 | 0.86 | 0.8889 | 0.7000 |
| No log | 2.93 | 1250 | 0.0663 | 0.7547 | 0.8 | 0.7767 | 0.4 |
| No log | 2.93 | 1250 | 0.0218 | 0.8974 | 0.875 | 0.8861 | 0.4 |
| No log | 2.93 | 1250 | 0.0739 | 0.8571 | 0.84 | 0.8485 | 0.4 |
| No log | 2.93 | 1250 | 0.0874 | 0.7802 | 0.905 | 0.8380 | 0.3000 |
| No log | 2.93 | 1250 | 0.0600 | 0.7122 | 0.4975 | 0.5858 | 0.6 |
| No log | 2.93 | 1250 | 0.0507 | 0.7939 | 0.905 | 0.8458 | 0.3000 |
| No log | 2.93 | 1250 | 0.0443 | 0.8095 | 0.935 | 0.8677 | 0.3000 |
| No log | 2.93 | 1250 | 0.0917 | 0.7689 | 0.865 | 0.8141 | 0.3000 |
| No log | 2.93 | 1250 | 0.0432 | 0.8443 | 0.895 | 0.8689 | 0.5 |
| No log | 2.93 | 1250 | 0.0252 | 0.9072 | 0.88 | 0.8934 | 0.6 |
| No log | 2.93 | 1250 | 0.0664 | 0.7788 | 0.845 | 0.8106 | 0.4 |
| No log | 2.93 | 1250 | 0.0598 | 0.8679 | 0.92 | 0.8932 | 0.4 |
| No log | 2.93 | 1250 | 0.0567 | 0.9021 | 0.875 | 0.8883 | 0.7000 |
| No log | 2.93 | 1250 | 0.0465 | 0.8122 | 0.865 | 0.8378 | 0.5 |
| No log | 2.93 | 1250 | 0.0344 | 0.8789 | 0.8392 | 0.8586 | 0.6 |
| No log | 2.93 | 1250 | 0.0602 | 0.7277 | 0.815 | 0.7689 | 0.4 |
| No log | 2.93 | 1250 | 0.0737 | 0.7929 | 0.785 | 0.7889 | 0.6 |
| No log | 2.93 | 1250 | 0.0569 | 0.8763 | 0.85 | 0.8629 | 0.6 |
| No log | 2.93 | 1250 | 0.0428 | 0.8157 | 0.885 | 0.8489 | 0.3000 |
| No log | 2.93 | 1250 | 0.1329 | 0.8458 | 0.85 | 0.8479 | 0.2 |
| No log | 2.93 | 1250 | 0.0249 | 0.7963 | 0.86 | 0.8269 | 0.4 |
| No log | 2.93 | 1250 | 0.0321 | 0.8990 | 0.9036 | 0.9013 | 0.2 |
| No log | 2.93 | 1250 | 0.0664 | 0.8246 | 0.94 | 0.8785 | 0.3000 |
| No log | 2.93 | 1250 | 0.0761 | 0.7673 | 0.7789 | 0.7731 | 0.4 |
| No log | 2.93 | 1250 | 0.0222 | 0.7874 | 0.815 | 0.8010 | 0.3000 |
| No log | 2.93 | 1250 | 0.0843 | 0.7397 | 0.895 | 0.8100 | 0.4 |
| No log | 2.93 | 1250 | 0.0317 | 0.8827 | 0.79 | 0.8338 | 0.6 |
| No log | 2.93 | 1250 | 0.0608 | 0.8696 | 0.8 | 0.8333 | 0.6 |
| No log | 2.93 | 1250 | 0.0715 | 0.8705 | 0.84 | 0.8550 | 0.6 |
| No log | 2.93 | 1250 | 0.1113 | 0.7425 | 0.865 | 0.7991 | 0.4 |
| No log | 2.93 | 1250 | 0.0726 | 0.8263 | 0.785 | 0.8051 | 0.4 |
| No log | 2.93 | 1250 | 0.0607 | 0.8244 | 0.845 | 0.8346 | 0.5 |
| No log | 2.93 | 1250 | 0.0487 | 0.8054 | 0.8945 | 0.8476 | 0.4 |
| No log | 2.93 | 1250 | 0.1693 | 0.6640 | 0.82 | 0.7338 | 0.093 |
| No log | 2.93 | 1250 | 0.0755 | 0.5393 | 0.515 | 0.5269 | 0.4 |
| No log | 2.93 | 1250 | 0.0950 | 0.9140 | 0.85 | 0.8808 | 0.4 |
| No log | 2.93 | 1250 | 0.2408 | 0.3272 | 0.8040 | 0.4651 | 0.001 |
| No log | 2.93 | 1250 | 0.0749 | 0.8852 | 0.8141 | 0.8482 | 0.4 |
| No log | 2.93 | 1250 | 0.0854 | 0.7284 | 0.885 | 0.7991 | 0.3000 |
| No log | 2.93 | 1250 | 0.0253 | 0.9278 | 0.9091 | 0.9184 | 0.2 |
| No log | 2.93 | 1250 | 0.0653 | 0.9137 | 0.9 | 0.9068 | 0.5 |
| No log | 2.93 | 1250 | 0.0265 | 0.86 | 0.8643 | 0.8622 | 0.4 |
| No log | 2.93 | 1250 | 0.0351 | 0.9198 | 0.745 | 0.8232 | 0.8 |
| No log | 2.93 | 1250 | 0.0405 | 0.7462 | 0.735 | 0.7406 | 0.5 |
| No log | 2.93 | 1250 | 0.0618 | 0.8366 | 0.845 | 0.8408 | 0.6 |
| No log | 2.93 | 1250 | 0.0598 | 0.6314 | 0.865 | 0.7300 | 0.3000 |
| No log | 2.93 | 1250 | 0.0537 | 0.9272 | 0.955 | 0.9409 | 0.3000 |
| No log | 2.93 | 1250 | 0.0533 | 0.8930 | 0.835 | 0.8630 | 0.7000 |
| No log | 2.93 | 1250 | 0.1265 | 0.7054 | 0.85 | 0.7710 | 0.4 |
| No log | 2.93 | 1250 | 0.0204 | 0.8596 | 0.7387 | 0.7946 | 0.8 |
| No log | 2.93 | 1250 | 0.1870 | 0.5634 | 0.7588 | 0.6467 | 0.025 |
| No log | 2.93 | 1250 | 0.0949 | 0.8796 | 0.84 | 0.8593 | 0.5 |
| No log | 2.93 | 1250 | 0.0499 | 0.8424 | 0.855 | 0.8486 | 0.6 |
| No log | 2.93 | 1250 | 0.0597 | 0.8725 | 0.89 | 0.8812 | 0.5 |
| No log | 2.93 | 1250 | 0.0988 | 0.8098 | 0.83 | 0.8198 | 0.2 |
| No log | 2.93 | 1250 | 0.0557 | 0.8939 | 0.8 | 0.8443 | 0.7000 |
| No log | 2.93 | 1250 | 0.1298 | 0.7391 | 0.935 | 0.8256 | 0.084 |
| No log | 2.93 | 1250 | 0.0369 | 0.7934 | 0.96 | 0.8688 | 0.4 |
| No log | 2.93 | 1250 | 0.0567 | 0.8737 | 0.865 | 0.8693 | 0.5 |
| No log | 2.93 | 1250 | 0.0158 | 0.8638 | 0.92 | 0.8910 | 0.5 |
| No log | 2.93 | 1250 | 0.0956 | 0.8 | 0.82 | 0.8099 | 0.5 |
| No log | 2.93 | 1250 | 0.0361 | 0.8641 | 0.89 | 0.8768 | 0.6 |
| No log | 2.93 | 1250 | 0.0573 | 0.8796 | 0.84 | 0.8593 | 0.6 |
| No log | 2.93 | 1250 | 0.0363 | 0.8836 | 0.835 | 0.8586 | 0.6 |
| No log | 2.93 | 1250 | 0.0881 | 0.815 | 0.815 | 0.815 | 0.5 |
| No log | 2.93 | 1250 | 0.0366 | 0.905 | 0.905 | 0.905 | 0.6 |
| No log | 2.93 | 1250 | 0.0707 | 0.7111 | 0.64 | 0.6737 | 0.3000 |
| No log | 2.93 | 1250 | 0.1180 | 0.8198 | 0.705 | 0.7581 | 0.5 |
| No log | 2.93 | 1250 | 0.0270 | 0.8889 | 0.84 | 0.8638 | 0.4 |
| No log | 2.93 | 1250 | 0.0707 | 0.8474 | 0.805 | 0.8256 | 0.7000 |
| No log | 2.93 | 1250 | 0.0879 | 0.8564 | 0.805 | 0.8299 | 0.6 |
| No log | 2.93 | 1250 | 0.0520 | 0.8646 | 0.83 | 0.8469 | 0.7000 |
| No log | 2.93 | 1250 | 0.0237 | 0.8744 | 0.87 | 0.8722 | 0.6 |
| No log | 2.93 | 1250 | 0.0159 | 0.8667 | 0.91 | 0.8878 | 0.3000 |
| No log | 2.93 | 1250 | 0.1066 | 0.8010 | 0.785 | 0.7929 | 0.6 |
| No log | 2.93 | 1250 | 0.0601 | 0.6872 | 0.67 | 0.6785 | 0.4 |
| No log | 2.93 | 1250 | 0.0887 | 0.8762 | 0.92 | 0.8976 | 0.068 |
| No log | 2.93 | 1250 | 0.1065 | 0.4278 | 0.77 | 0.55 | 0.046 |
| No log | 2.93 | 1250 | 0.0610 | 0.7568 | 0.8485 | 0.8000 | 0.093 |
| No log | 2.93 | 1250 | 0.0639 | 0.8906 | 0.855 | 0.8724 | 0.3000 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.93 | 1250 | 0.0093 | 0.7946 | 0.9036 | 0.8456 | 0.6 |
| No log | 2.93 | 1250 | 0.0035 | 0.9657 | 0.985 | 0.9752 | 0.5 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.005 |
| No log | 2.93 | 1250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.0430 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.016 |
| No log | 2.93 | 1250 | 0.0024 | 0.9947 | 1.0 | 0.9973 | 0.058 |
| No log | 2.93 | 1250 | 0.0012 | 1.0 | 0.99 | 0.9950 | 0.6 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.93 | 1250 | 0.0032 | 0.9949 | 0.985 | 0.9899 | 0.9 |
| No log | 2.93 | 1250 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0092 | 0.975 | 0.975 | 0.975 | 0.005 |
| No log | 2.93 | 1250 | 0.0168 | 0.9894 | 0.935 | 0.9614 | 0.079 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.076 |
| No log | 2.93 | 1250 | 0.0155 | 0.9742 | 0.945 | 0.9594 | 0.2 |
| No log | 2.93 | 1250 | 0.0008 | 0.9950 | 1.0 | 0.9975 | 0.021 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.006 |
| No log | 2.93 | 1250 | 0.0030 | 0.9949 | 0.985 | 0.9899 | 0.3000 |
| No log | 2.93 | 1250 | 0.0012 | 0.995 | 0.995 | 0.995 | 0.4 |
| No log | 2.93 | 1250 | 0.0035 | 0.9704 | 0.985 | 0.9777 | 0.5 |
| No log | 2.93 | 1250 | 0.0359 | 0.9218 | 0.825 | 0.8707 | 0.7000 |
| No log | 2.93 | 1250 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.016 |
| No log | 2.93 | 1250 | 0.0014 | 1.0 | 0.985 | 0.9924 | 0.4 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.0100 |
| No log | 2.93 | 1250 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.7000 |
| No log | 2.93 | 1250 | 0.0011 | 0.995 | 0.995 | 0.995 | 0.3000 |
| No log | 2.93 | 1250 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.096 |
| No log | 2.93 | 1250 | 0.0049 | 0.9375 | 0.975 | 0.9559 | 0.5 |
| No log | 2.93 | 1250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.07 |
| No log | 2.93 | 1250 | 0.0183 | 0.9796 | 0.96 | 0.9697 | 0.0090 |
| No log | 2.93 | 1250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 2.93 | 1250 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.93 | 1250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 2.93 | 1250 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 2.93 | 1250 | 0.0049 | 0.9792 | 1.0 | 0.9895 | 0.023 |
| No log | 2.93 | 1250 | 0.0037 | 0.9423 | 0.98 | 0.9608 | 0.2 |
| No log | 2.93 | 1250 | 0.0042 | 0.9804 | 1.0 | 0.9901 | 0.5 |
| No log | 2.93 | 1250 | 0.0010 | 0.9950 | 1.0 | 0.9975 | 0.6 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 2.93 | 1250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0016 | 0.9950 | 0.99 | 0.9925 | 0.074 |
| No log | 2.93 | 1250 | 0.0178 | 0.9310 | 0.9545 | 0.9426 | 0.7000 |
| No log | 2.93 | 1250 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 2.93 | 1250 | 0.0022 | 0.99 | 0.99 | 0.99 | 0.7000 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 2.93 | 1250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.069 |
| No log | 2.93 | 1250 | 0.0154 | 0.9346 | 1.0 | 0.9662 | 0.005 |
| No log | 2.93 | 1250 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 2.93 | 1250 | 0.0169 | 0.9378 | 0.905 | 0.9211 | 0.5 |
| No log | 2.93 | 1250 | 0.0040 | 0.9950 | 1.0 | 0.9975 | 0.002 |
| No log | 2.93 | 1250 | 0.0010 | 0.9950 | 1.0 | 0.9975 | 0.8 |
| No log | 2.93 | 1250 | 0.0011 | 0.9901 | 1.0 | 0.9950 | 0.078 |
| No log | 2.93 | 1250 | 0.0005 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 2.93 | 1250 | 0.0121 | 0.7941 | 0.945 | 0.8630 | 0.0430 |
| No log | 2.93 | 1250 | 0.0116 | 1.0 | 0.99 | 0.9950 | 0.0180 |
| No log | 2.93 | 1250 | 0.0068 | 0.98 | 0.98 | 0.98 | 0.2 |
| No log | 2.93 | 1250 | 0.0161 | 0.7083 | 0.8629 | 0.7780 | 0.5 |
| No log | 2.93 | 1250 | 0.0064 | 0.9147 | 0.965 | 0.9392 | 0.3000 |
| No log | 2.93 | 1250 | 0.0376 | 0.9171 | 0.885 | 0.9008 | 0.6 |
| No log | 2.93 | 1250 | 0.1399 | 0.5974 | 0.8214 | 0.6917 | 0.055 |
| No log | 2.93 | 1250 | 0.0056 | 0.955 | 0.955 | 0.955 | 0.6 |
| No log | 2.93 | 1250 | 0.0888 | 0.7885 | 0.8723 | 0.8283 | 0.3000 |
| No log | 2.93 | 1250 | 0.0270 | 0.8421 | 0.96 | 0.8972 | 0.2 |
| No log | 2.93 | 1250 | 0.0279 | 0.8872 | 0.865 | 0.8759 | 0.4 |
| No log | 2.93 | 1250 | 0.0275 | 0.8786 | 0.905 | 0.8916 | 0.3000 |
| No log | 2.93 | 1250 | 0.0257 | 0.9113 | 0.925 | 0.9181 | 0.6 |
| No log | 2.93 | 1250 | 0.0236 | 0.9451 | 0.86 | 0.9005 | 0.5 |
| No log | 2.93 | 1250 | 0.0325 | 0.8950 | 0.81 | 0.8504 | 0.5 |
| No log | 2.93 | 1250 | 0.0183 | 0.8233 | 0.885 | 0.8530 | 0.4 |
| No log | 2.93 | 1250 | 0.0395 | 0.8848 | 0.73 | 0.8 | 0.7000 |
| No log | 2.93 | 1250 | 0.0487 | 0.8614 | 0.87 | 0.8657 | 0.4 |
| No log | 2.93 | 1250 | 0.0022 | 0.9950 | 1.0 | 0.9975 | 0.08 |
| No log | 2.93 | 1250 | 0.0322 | 0.8846 | 0.805 | 0.8429 | 0.7000 |
| No log | 2.93 | 1250 | 0.0265 | 0.8309 | 0.86 | 0.8452 | 0.3000 |
| No log | 2.93 | 1250 | 0.0280 | 0.7961 | 0.8283 | 0.8119 | 0.4 |
| No log | 2.93 | 1250 | 0.0623 | 0.7317 | 0.7538 | 0.7426 | 0.5 |
| No log | 2.93 | 1250 | 0.0245 | 0.8173 | 0.85 | 0.8333 | 0.3000 |
| No log | 2.93 | 1250 | 0.0182 | 0.9010 | 0.91 | 0.9055 | 0.3000 |
| No log | 2.93 | 1250 | 0.0065 | 0.9646 | 0.955 | 0.9598 | 0.5 |
| No log | 2.93 | 1250 | 0.0314 | 0.8168 | 0.825 | 0.8209 | 0.6 |
| No log | 2.93 | 1250 | 0.0336 | 0.7965 | 0.9 | 0.8451 | 0.2 |
| No log | 2.93 | 1250 | 0.0281 | 0.9115 | 0.875 | 0.8929 | 0.6 |
| No log | 2.93 | 1250 | 0.0238 | 0.7441 | 0.785 | 0.7640 | 0.4 |
| No log | 2.93 | 1250 | 0.0272 | 0.8429 | 0.885 | 0.8634 | 0.4 |
| No log | 2.93 | 1250 | 0.0300 | 0.9113 | 0.925 | 0.9181 | 0.085 |
| No log | 2.93 | 1250 | 0.0641 | 0.7423 | 0.72 | 0.7310 | 0.4 |
| No log | 2.93 | 1250 | 0.0104 | 0.9899 | 0.98 | 0.9849 | 0.3000 |
| No log | 2.93 | 1250 | 0.0012 | 0.9901 | 1.0 | 0.9950 | 0.015 |
| No log | 2.93 | 1250 | 0.0147 | 0.9652 | 0.97 | 0.9676 | 0.3000 |
| No log | 2.93 | 1250 | 0.0431 | 0.8069 | 0.815 | 0.8109 | 0.3000 |
| No log | 2.93 | 1250 | 0.0464 | 0.7357 | 0.835 | 0.7822 | 0.4 |
| No log | 2.93 | 1250 | 0.1084 | 0.6739 | 0.6596 | 0.6667 | 0.5 |
| No log | 2.93 | 1250 | 0.0323 | 0.8171 | 0.67 | 0.7363 | 0.5 |
| No log | 2.93 | 1250 | 0.0135 | 0.9495 | 0.94 | 0.9447 | 0.5 |
| No log | 2.93 | 1250 | 0.0255 | 0.9122 | 0.935 | 0.9235 | 0.4 |
| No log | 2.93 | 1250 | 0.0035 | 0.9949 | 0.985 | 0.9899 | 0.7000 |
| No log | 2.93 | 1250 | 0.0263 | 0.8645 | 0.925 | 0.8937 | 0.2 |
| No log | 2.93 | 1250 | 0.0308 | 0.6466 | 0.805 | 0.7171 | 0.4 |
| No log | 2.93 | 1250 | 0.0650 | 0.7286 | 0.7846 | 0.7556 | 0.4 |
| No log | 2.93 | 1250 | 0.0204 | 0.9347 | 0.93 | 0.9323 | 0.4 |
| No log | 2.93 | 1250 | 0.0202 | 0.8973 | 0.83 | 0.8623 | 0.7000 |
| No log | 2.93 | 1250 | 0.0026 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 2.93 | 1250 | 0.0106 | 0.9444 | 0.935 | 0.9397 | 0.5 |
| No log | 2.93 | 1250 | 0.0335 | 0.9206 | 0.87 | 0.8946 | 0.5 |
| No log | 2.93 | 1250 | 0.0056 | 0.9583 | 0.9583 | 0.9583 | 0.4 |
| No log | 2.93 | 1250 | 0.0179 | 0.9508 | 0.87 | 0.9086 | 0.6 |
| No log | 2.93 | 1250 | 0.0683 | 0.8579 | 0.815 | 0.8359 | 0.3000 |
| No log | 2.93 | 1250 | 0.0353 | 0.7524 | 0.8061 | 0.7783 | 0.4 |
| No log | 2.93 | 1250 | 0.0034 | 0.9852 | 1.0 | 0.9926 | 0.4 |
| No log | 2.93 | 1250 | 0.0788 | 0.775 | 0.775 | 0.775 | 0.5 |
| No log | 2.93 | 1250 | 0.0367 | 0.5762 | 0.435 | 0.4957 | 0.3000 |
| No log | 2.93 | 1250 | 0.0292 | 0.9189 | 0.85 | 0.8831 | 0.2 |
| No log | 2.93 | 1250 | 0.1175 | 0.6840 | 0.79 | 0.7332 | 0.3000 |
| No log | 2.93 | 1250 | 0.0845 | 0.6380 | 0.705 | 0.6698 | 0.5 |
| No log | 2.93 | 1250 | 0.0696 | 0.6443 | 0.815 | 0.7196 | 0.0730 |
| No log | 2.93 | 1250 | 0.1093 | 0.7011 | 0.915 | 0.7939 | 0.04 |
| No log | 2.93 | 1250 | 0.0433 | 0.6906 | 0.96 | 0.8033 | 0.2 |
| No log | 2.93 | 1250 | 0.0390 | 0.7955 | 0.875 | 0.8333 | 0.4 |
| No log | 2.93 | 1250 | 0.0451 | 0.4958 | 0.5930 | 0.5400 | 0.3000 |
| No log | 2.93 | 1250 | 0.0529 | 0.7949 | 0.775 | 0.7848 | 0.4 |
| No log | 2.93 | 1250 | 0.0531 | 0.6389 | 0.8364 | 0.7244 | 0.4 |
| No log | 2.93 | 1250 | 0.0401 | 0.8246 | 0.87 | 0.8467 | 0.3000 |
| No log | 2.93 | 1250 | 0.0401 | 0.8246 | 0.87 | 0.8467 | 0.3000 |
| No log | 2.93 | 1250 | 0.0398 | 0.7991 | 0.855 | 0.8261 | 0.3000 |
| No log | 2.93 | 1250 | 0.0521 | 0.8054 | 0.89 | 0.8456 | 0.2 |
| No log | 2.93 | 1250 | 0.0352 | 0.7655 | 0.865 | 0.8122 | 0.2 |
| No log | 2.93 | 1250 | 0.0392 | 0.8308 | 0.81 | 0.8203 | 0.5 |
| No log | 2.93 | 1250 | 0.0879 | 0.7026 | 0.815 | 0.7546 | 0.2 |
| No log | 2.93 | 1250 | 0.0456 | 0.7571 | 0.795 | 0.7756 | 0.4 |
| No log | 2.93 | 1250 | 0.0443 | 0.8009 | 0.905 | 0.8498 | 0.2 |
| No log | 2.93 | 1250 | 0.1638 | 0.6618 | 0.675 | 0.6683 | 0.2 |
| No log | 2.93 | 1250 | 0.0513 | 0.8075 | 0.755 | 0.7804 | 0.5 |
| No log | 2.93 | 1250 | 0.1173 | 0.7004 | 0.8342 | 0.7615 | 0.078 |
| No log | 2.93 | 1250 | 0.0355 | 0.8488 | 0.87 | 0.8593 | 0.3000 |
| No log | 2.93 | 1250 | 0.0355 | 0.8488 | 0.87 | 0.8593 | 0.3000 |
| No log | 2.93 | 1250 | 0.0278 | 0.8611 | 0.6739 | 0.7561 | 0.6 |
| No log | 2.93 | 1250 | 0.0278 | 0.8611 | 0.6739 | 0.7561 | 0.6 |
| No log | 2.93 | 1250 | 0.0372 | 0.8556 | 0.8 | 0.8269 | 0.4 |
| No log | 2.93 | 1250 | 0.0436 | 0.5326 | 0.49 | 0.5104 | 0.4 |
| No log | 2.93 | 1250 | 0.0543 | 0.4583 | 0.8462 | 0.5946 | 0.079 |
| No log | 2.93 | 1250 | 0.0329 | 0.7571 | 0.795 | 0.7756 | 0.5 |
| No log | 2.93 | 1250 | 0.0363 | 0.6199 | 0.685 | 0.6508 | 0.4 |
| No log | 2.93 | 1250 | 0.0533 | 0.7336 | 0.785 | 0.7585 | 0.3000 |
| No log | 2.93 | 1250 | 0.0408 | 0.7703 | 0.805 | 0.7873 | 0.6 |
| No log | 2.93 | 1250 | 0.0598 | 0.7075 | 0.895 | 0.7903 | 0.2 |
| No log | 2.93 | 1250 | 0.1094 | 0.4933 | 0.3558 | 0.4134 | 0.3000 |
| No log | 2.93 | 1250 | 0.0787 | 0.7178 | 0.865 | 0.7846 | 0.2 |
| No log | 2.93 | 1250 | 0.0527 | 0.7682 | 0.845 | 0.8048 | 0.5 |
| No log | 2.93 | 1250 | 0.1015 | 0.7677 | 0.76 | 0.7638 | 0.7000 |
| No log | 2.93 | 1250 | 0.1224 | 0.7130 | 0.82 | 0.7628 | 0.8 |
| No log | 2.93 | 1250 | 0.0743 | 0.6326 | 0.835 | 0.7198 | 0.2 |
| No log | 2.93 | 1250 | 0.1311 | 0.7948 | 0.91 | 0.8485 | 0.066 |
| No log | 2.93 | 1250 | 0.1690 | 0.3813 | 0.53 | 0.4435 | 0.007 |
| No log | 2.93 | 1250 | 0.0683 | 0.5067 | 0.565 | 0.5343 | 0.3000 |
| No log | 2.93 | 1250 | 0.0764 | 0.7846 | 0.965 | 0.8655 | 0.5 |
| No log | 2.93 | 1250 | 0.0525 | 0.4941 | 0.625 | 0.5519 | 0.035 |
| No log | 2.93 | 1250 | 0.0609 | 0.7719 | 0.88 | 0.8224 | 0.2 |
| No log | 2.93 | 1250 | 0.0620 | 1.0 | 0.6667 | 0.8 | 0.6 |
| No log | 2.93 | 1250 | 0.0491 | 0.7218 | 0.895 | 0.7991 | 0.2 |
| No log | 2.93 | 1250 | 0.0400 | 0.8543 | 0.85 | 0.8521 | 0.4 |
| No log | 2.93 | 1250 | 0.0685 | 0.5930 | 0.845 | 0.6969 | 0.08 |
| No log | 2.93 | 1250 | 0.0483 | 0.7837 | 0.815 | 0.7990 | 0.4 |
| No log | 2.93 | 1250 | 0.0501 | 0.5163 | 0.635 | 0.5695 | 0.4 |
| No log | 2.93 | 1250 | 0.0281 | 0.8837 | 0.95 | 0.9157 | 0.2 |
| No log | 2.93 | 1250 | 0.0968 | 0.4542 | 0.57 | 0.5055 | 0.5 |
| No log | 2.93 | 1250 | 0.0514 | 0.7824 | 0.8492 | 0.8145 | 0.4 |
| No log | 2.93 | 1250 | 0.1047 | 0.5730 | 0.4951 | 0.5312 | 0.3000 |
| No log | 2.93 | 1250 | 0.1028 | 0.4599 | 0.63 | 0.5316 | 0.0190 |
| No log | 2.93 | 1250 | 0.0395 | 0.8634 | 0.885 | 0.8741 | 0.5 |
| No log | 2.93 | 1250 | 0.0624 | 0.7727 | 0.935 | 0.8462 | 0.099 |
| No log | 2.93 | 1250 | 0.0624 | 0.7727 | 0.935 | 0.8462 | 0.099 |
| No log | 2.93 | 1250 | 0.0479 | 0.7011 | 0.61 | 0.6524 | 0.3000 |
| No log | 2.93 | 1250 | 0.0632 | 0.8026 | 0.9242 | 0.8592 | 0.3000 |
| No log | 2.93 | 1250 | 0.0469 | 0.6113 | 0.8693 | 0.7178 | 0.2 |
| No log | 2.93 | 1250 | 0.0531 | 0.7580 | 0.83 | 0.7924 | 0.4 |
| No log | 2.93 | 1250 | 0.0487 | 0.8122 | 0.8 | 0.8060 | 0.3000 |
| No log | 2.93 | 1250 | 0.0417 | 0.7752 | 0.845 | 0.8086 | 0.3000 |
| No log | 2.93 | 1250 | 0.0516 | 0.7031 | 0.805 | 0.7506 | 0.6 |
| No log | 2.93 | 1250 | 0.0501 | 0.6760 | 0.97 | 0.7967 | 0.085 |
| No log | 2.93 | 1250 | 0.0404 | 0.8233 | 0.885 | 0.8530 | 0.3000 |
| No log | 2.93 | 1250 | 0.0511 | 0.7677 | 0.76 | 0.7638 | 0.5 |
| No log | 2.93 | 1250 | 0.0323 | 0.8601 | 0.83 | 0.8448 | 0.4 |
| No log | 2.93 | 1250 | 0.0683 | 0.7761 | 0.78 | 0.7781 | 0.3000 |
| No log | 2.93 | 1250 | 0.0529 | 0.4094 | 0.7 | 0.5166 | 0.0730 |
| No log | 2.93 | 1250 | 0.0599 | 0.5579 | 0.675 | 0.6109 | 0.5 |
| No log | 2.93 | 1250 | 0.0541 | 0.7020 | 0.895 | 0.7868 | 0.0510 |
| No log | 2.93 | 1250 | 0.0635 | 0.6109 | 0.785 | 0.6871 | 0.6 |
| No log | 2.93 | 1250 | 0.0579 | 0.7713 | 0.86 | 0.8132 | 0.4 |
| No log | 2.93 | 1250 | 0.0394 | 0.7682 | 0.845 | 0.8048 | 0.3000 |
| No log | 2.93 | 1250 | 0.1085 | 0.5036 | 0.69 | 0.5823 | 0.3000 |
| No log | 2.93 | 1250 | 0.0718 | 0.7861 | 0.79 | 0.7880 | 0.4 |
| No log | 2.93 | 1250 | 0.0482 | 0.6516 | 0.795 | 0.7162 | 0.4 |
| No log | 2.93 | 1250 | 0.0482 | 0.6516 | 0.795 | 0.7162 | 0.4 |
| No log | 2.93 | 1250 | 0.0482 | 0.6516 | 0.795 | 0.7162 | 0.4 |
| No log | 2.93 | 1250 | 0.0482 | 0.6516 | 0.795 | 0.7162 | 0.4 |
| No log | 2.93 | 1250 | 0.1464 | 0.4810 | 0.5758 | 0.5241 | 0.0720 |
| No log | 2.93 | 1250 | 0.0629 | 0.7465 | 0.8182 | 0.7807 | 0.6 |
| No log | 2.93 | 1250 | 0.0212 | 0.9314 | 0.95 | 0.9406 | 0.2 |
| No log | 2.93 | 1250 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.3000 |
| No log | 2.93 | 1250 | 0.0029 | 1.0 | 0.995 | 0.9975 | 0.3000 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.066 |
| No log | 2.93 | 1250 | 0.0004 | 1.0 | 0.995 | 0.9975 | 0.6 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.067 |
| No log | 2.93 | 1250 | 0.0031 | 0.9851 | 0.99 | 0.9875 | 0.8 |
| No log | 2.93 | 1250 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.3000 |
| No log | 2.93 | 1250 | 0.0212 | 0.9541 | 0.935 | 0.9444 | 0.096 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.048 |
| No log | 2.93 | 1250 | 0.0418 | 0.8018 | 0.89 | 0.8436 | 0.0600 |
| No log | 2.93 | 1250 | 0.0017 | 0.9901 | 1.0 | 0.9950 | 0.0180 |
| No log | 2.93 | 1250 | 0.0004 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 2.93 | 1250 | 0.0034 | 0.9851 | 0.995 | 0.9900 | 0.9 |
| No log | 2.93 | 1250 | 0.0056 | 0.9653 | 0.975 | 0.9701 | 0.6 |
| No log | 2.93 | 1250 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.0220 |
| No log | 2.93 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.077 |
| No log | 2.93 | 1250 | 0.0024 | 0.9900 | 0.995 | 0.9925 | 0.2 |
| No log | 2.93 | 1250 | 0.0007 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 2.93 | 1250 | 0.0200 | 0.9122 | 0.935 | 0.9235 | 0.025 |
| No log | 2.93 | 1250 | 0.1241 | 0.4107 | 0.575 | 0.4792 | 0.7000 |
| No log | 2.93 | 1250 | 0.0958 | 0.3934 | 0.3310 | 0.3596 | 0.4 |
| No log | 2.93 | 1250 | 0.1214 | 0.6587 | 0.685 | 0.6716 | 0.3000 |
| No log | 2.93 | 1250 | 0.1157 | 0.6058 | 0.73 | 0.6621 | 0.4 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "v2-WtP-FT-12L-256BS-UD-Opus-cUD-cOpus", "results": []}]} | igorsterner/v2-WtP-FT-12L-256BS-UD-Opus-cUD-cOpus | null | [
"transformers",
"safetensors",
"xlm-token",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:37:55+00:00 | [] | [] | TAGS
#transformers #safetensors #xlm-token #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| v2-WtP-FT-12L-256BS-UD-Opus-cUD-cOpus
=====================================
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1157
* Precision: 0.6058
* Recall: 0.73
* F1: 0.6621
* Threshold: 0.4
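
The threshold above is the probability cutoff used to turn the model's per-token scores into hard boundary predictions before precision, recall, and F1 are computed. The sketch below illustrates that step; it is a minimal illustration that assumes the model emits per-token boundary probabilities in [0, 1], and the function names are placeholders rather than part of this repository.

```python
import numpy as np

def apply_threshold(probs: np.ndarray, threshold: float = 0.4) -> set:
    """Positions whose boundary probability exceeds the threshold."""
    return set(np.flatnonzero(probs > threshold).tolist())

def precision_recall_f1(pred: set, gold: set) -> tuple:
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

# Toy example: scores for 6 token positions, gold boundaries at 2 and 5.
probs = np.array([0.05, 0.10, 0.92, 0.30, 0.20, 0.81])
print(precision_recall_f1(apply_threshold(probs), {2, 5}))  # (1.0, 1.0, 1.0)
```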
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 512
* eval\_batch\_size: 512
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 3
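
For reference, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a hedged sketch, not the training script from this repository: the output directory is arbitrary, and whether the batch size of 512 is per device or total is an assumption.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above; model, dataset,
# and Trainer wiring are omitted and not documented in this card.
training_args = TrainingArguments(
    output_dir="v2-WtP-FT-12L-256BS-UD-Opus-cUD-cOpus",  # arbitrary choice
    learning_rate=1e-4,
    per_device_train_batch_size=512,  # assumed per-device
    per_device_eval_batch_size=512,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
)
```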
### Training results
### Framework versions
* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 512\n* eval\\_batch\\_size: 512\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #xlm-token #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 512\n* eval\\_batch\\_size: 512\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
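
Pending author-provided instructions, a minimal loading sketch might look like the following. This is an assumption, not documented usage: the repository declares a custom `gptmodel` architecture, so `trust_remote_code=True` is presumed necessary and `AutoModel` may not be the intended entry point.

```python
from transformers import AutoModel

# Hypothetical: load the custom-architecture checkpoint from the Hub.
model = AutoModel.from_pretrained(
    "adammoss/gpt-pretrain-lm-sn25",
    trust_remote_code=True,  # assumed required for the custom gptmodel class
)
```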
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | adammoss/gpt-pretrain-lm-sn25 | null | [
"transformers",
"safetensors",
"gptmodel",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:37:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gptmodel #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gptmodel #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4455
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
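For reference, a minimal sketch of how these settings map onto `TrainingArguments`; the output directory is a placeholder, not taken from this card:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="clasificador-muchocine",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```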
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3758 | 0.3665 |
| 1.4111 | 2.0 | 776 | 1.3400 | 0.4077 |
| 1.0525 | 3.0 | 1164 | 1.4455 | 0.4297 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | Mouzer/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:40:02+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4455
* Accuracy: 0.4297
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4385
- Accuracy: 0.4413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3293 | 0.3948 |
| 1.3934 | 2.0 | 776 | 1.3066 | 0.4116 |
| 1.0283 | 3.0 | 1164 | 1.4385 | 0.4413 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | Anagmedina/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:40:26+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4385
* Accuracy: 0.4413
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-analysis-model
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4125
- Accuracy: 0.8433
- Precision: 0.8181
- Recall: 0.8433
- F1: 0.8155
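The precision/recall/F1 figures above are consistent with weighted averaging over classes; a sketch of a `compute_metrics` function that would produce all four numbers (the weighted averaging is an assumption, as the card does not state it):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Assumption: weighted averaging; the card does not document the setting.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {"accuracy": accuracy_score(labels, preds),
            "precision": precision,
            "recall": recall,
            "f1": f1}
```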
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5521 | 1.0 | 4574 | 0.4900 | 0.8093 | 0.8041 | 0.8093 | 0.7833 |
| 0.4772 | 2.0 | 9148 | 0.4125 | 0.8433 | 0.8181 | 0.8433 | 0.8155 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
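In the absence of author-provided usage instructions, a minimal inference sketch; the repository id comes from this card's metadata, the example sentence is illustrative, and the class-to-sentiment mapping is undocumented:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "annavtkn/sentiment-analysis-model"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# The base model is multilingual, so non-English input is reasonable to try.
inputs = tokenizer("Das Produkt ist großartig!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class meanings are not documented in this card
```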
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "sentiment-analysis-model", "results": []}]} | annavtkn/sentiment-analysis-model | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:40:26+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| sentiment-analysis-model
========================
This model is a fine-tuned version of distilbert/distilbert-base-multilingual-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4125
* Accuracy: 0.8433
* Precision: 0.8181
* Recall: 0.8433
* F1: 0.8155
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4652
- Accuracy: 0.4426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
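If, as the model name suggests, the data is the public `muchocine` corpus of Spanish film reviews rated 1-5 stars, loading it would look like the sketch below; the dataset id is an assumption, since the card itself only reports a "None" dataset:

```python
from datasets import load_dataset

# Assumption: the public "muchocine" dataset of Spanish film reviews;
# the card does not actually name its training data.
dataset = load_dataset("muchocine")
print(dataset)
```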
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3703 | 0.3884 |
| 1.3806 | 2.0 | 776 | 1.3091 | 0.4245 |
| 0.9712 | 3.0 | 1164 | 1.4652 | 0.4426 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | AboGeek/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:40:44+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4652
* Accuracy: 0.4426
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4140
- Accuracy: 0.4477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3447 | 0.3948 |
| 1.4031 | 2.0 | 776 | 1.2922 | 0.4219 |
| 1.0011 | 3.0 | 1164 | 1.4140 | 0.4477 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
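A minimal inference sketch for this checkpoint; the repository id comes from this card's metadata, and the `LABEL_n`-to-star mapping is an assumption, since the card does not document it:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Jhosx/clasificador-muchocine")
print(classifier("Una película entrañable, con un guion sorprendente."))
# e.g. [{'label': 'LABEL_3', 'score': ...}] -- the label-to-star mapping
# is not documented in this card
```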
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | Jhosx/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:40:52+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4140
* Accuracy: 0.4477
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4294
- Accuracy: 0.4310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3438 | 0.4142 |
| 1.391 | 2.0 | 776 | 1.3130 | 0.4219 |
| 1.0162 | 3.0 | 1164 | 1.4294 | 0.4310 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | mmarquez/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:41:06+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4294
* Accuracy: 0.4310
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | # meta-llama/Meta-Llama-3-8B-Instruct AWQ
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Meta-Llama-3-8B-Instruct-AWQ"
system_message = "You are Meta-Llama-3-8B-Instruct, incarnated as a powerful AI. You were created by meta-llama."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens using the model's built-in Llama-3 chat template
prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": prompt},
]
tokens = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors='pt').cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to the most commonly used GPTQ settings, it offers faster Transformers-based inference with equivalent or better output quality.
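For context, producing a 4-bit AWQ checkpoint like this one typically looks like the sketch below; the `quant_config` values are AutoAWQ's commonly used defaults, not confirmed settings for this repository:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Meta-Llama-3-8B-Instruct"
quant_config = {"zero_point": True, "q_group_size": 128,
                "w_bit": 4, "version": "GEMM"}  # common defaults, assumed

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run AWQ calibration and write the quantized weights to disk.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("Meta-Llama-3-8B-Instruct-AWQ")
tokenizer.save_pretrained("Meta-Llama-3-8B-Instruct-AWQ")
```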
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which adds support for all model types (see the usage sketch after this list).
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
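As an example of the list above, a minimal vLLM sketch; the sampling settings are illustrative:

```python
from vllm import LLM, SamplingParams

# quantization="awq" tells vLLM to load the 4-bit AWQ weights directly.
llm = LLM(model="solidrust/Meta-Llama-3-8B-Instruct-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Explain AWQ quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```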
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Meta-Llama-3-8B-Instruct-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T21:41:21+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us
| # meta-llama/Meta-Llama-3-8B-Instruct AWQ
- Model creator: meta-llama
- Original model: Meta-Llama-3-8B-Instruct
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# meta-llama/Meta-Llama-3-8B-Instruct AWQ\n\n- Model creator: meta-llama\n- Original model: Meta-Llama-3-8B-Instruct",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us \n",
"# meta-llama/Meta-Llama-3-8B-Instruct AWQ\n\n- Model creator: meta-llama\n- Original model: Meta-Llama-3-8B-Instruct",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | mariaesther/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:41:25+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
|
# clasificador-muchocine
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# clasificador-muchocine\n\nThis model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"# clasificador-muchocine\n\nThis model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4404
- Accuracy: 0.4310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3463 | 0.4103 |
| 1.3791 | 2.0 | 776 | 1.3135 | 0.4245 |
| 0.9907 | 3.0 | 1164 | 1.4404 | 0.4310 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | qwerasd-qweasd/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:41:33+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4404
* Accuracy: 0.4310
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
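No author-provided snippet is available; a minimal sketch using the `StableDiffusionPipeline` this repository is tagged with (the prompt and dtype choice are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id comes from this card's metadata; fp16 on GPU is an assumption.
pipe = StableDiffusionPipeline.from_pretrained(
    "rubbrband/awpainting_v11", torch_dtype=torch.float16
).to("cuda")

image = pipe("an impressionist oil painting of a harbor at dawn").images[0]
image.save("harbor.png")
```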
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/awpainting_v11 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-23T21:42:23+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4711
- Accuracy: 0.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3508 | 0.4052 |
| 1.376 | 2.0 | 776 | 1.3100 | 0.4232 |
| 0.9589 | 3.0 | 1164 | 1.4711 | 0.44 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | Arckmonde/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:42:47+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4711
* Accuracy: 0.44
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1523
- Accuracy: 0.4490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.5191 | 0.4529 |
| 0.8375 | 2.0 | 776 | 1.7402 | 0.4387 |
| 0.5269 | 3.0 | 1164 | 2.1523 | 0.4490 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | prissila/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:42:53+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1523
* Accuracy: 0.4490
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** baris-yazici
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
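A minimal loading sketch with Unsloth (hedged: the card does not state whether this repo ships merged weights or a LoRA adapter, and the sequence length and 4-bit flag below are assumptions):

```python
from unsloth import FastLanguageModel

# Assumed settings; adjust max_seq_length / quantization to your setup.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="baris-yazici/mistral7b_fake_news_detect",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into inference mode
```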
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | baris-yazici/mistral7b_fake_news_detect | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:43:08+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: baris-yazici
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: baris-yazici\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: baris-yazici\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4601
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3781 | 0.3716 |
| 1.3815 | 2.0 | 776 | 1.3322 | 0.4155 |
| 1.0246 | 3.0 | 1164 | 1.4601 | 0.4297 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | elwilnor/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:43:39+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4601
* Accuracy: 0.4297
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4140
- Accuracy: 0.4477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3447 | 0.3948 |
| 1.4031 | 2.0 | 776 | 1.2922 | 0.4219 |
| 1.0011 | 3.0 | 1164 | 1.4140 | 0.4477 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | edgartenorio/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:43:44+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4140
* Accuracy: 0.4477
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3964
- Accuracy: 0.4310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3830 | 0.3729 |
| 1.4264 | 2.0 | 776 | 1.3051 | 0.4116 |
| 1.0769 | 3.0 | 1164 | 1.3964 | 0.4310 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | rednaxela8121/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:44:15+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3964
* Accuracy: 0.4310
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Quant Infos
## Includes latest BPE tokenizer fixes 🎉
- Updated for the latest BPE pre-tokenizer fixes https://github.com/ggerganov/llama.cpp/pull/6920
- quants done with an importance matrix to reduce quantization loss
- K & IQ quants in basically all variants from Q6_K down to IQ1_S
- fixed end token for instruct mode (<|eot_id|>[128009])
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [f4ab2a41476600a98067a9474ea8f9e6db41bcfa](https://github.com/ggerganov/llama.cpp/commit/f4ab2a41476600a98067a9474ea8f9e6db41bcfa) (master from 2024-04-29)
- Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) dataset.
```
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
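To actually run one of these quants, the `llama-cpp-python` bindings are one option; a minimal sketch (the filename and quant level below are placeholders, not a guaranteed file name in this repo):

```python
from llama_cpp import Llama

# Placeholder filename/quant level; substitute any GGUF file from this repo.
llm = Llama(model_path="Phi-3-mini-4k-instruct-imat-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What does an importance matrix improve?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```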
# Original Model Card
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered, publicly available website data with a focus on high-quality, reasoning-dense properties.
The model belongs to the Phi-3 family (Mini version) in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which denote the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is suited for applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated into the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
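In practice you usually don't need to build this string by hand: the tokenizer shipped with the checkpoint carries a chat template that should render the same layout. A minimal sketch (assuming the official `microsoft/Phi-3-mini-4k-instruct` tokenizer):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)
messages = [
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
# Should render the <|user|> ... <|end|> <|assistant|> layout shown above as one string.
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```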
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"` (see the sketch after this list)
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
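For the V100/eager case above, a minimal sketch of what that call could look like (the dtype choice here is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM

# Sketch: fall back to the eager attention path on GPUs without flash attention.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.float16,  # assumed; pick what your hardware supports
    attn_implementation="eager",
    trust_remote_code=True,
)
```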
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
| {"language": ["en"], "license": "mit", "tags": ["nlp", "code", "microsoft", "phi", "phi-3", "gguf", "imatrix", "importance matrix"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "license_link": "LICENSE", "pipeline_tag": "text-generation"} | qwp4w3hyb/Phi-3-mini-4k-instruct-iMat-GGUF | null | [
"transformers",
"gguf",
"phi3",
"text-generation",
"nlp",
"code",
"microsoft",
"phi",
"phi-3",
"imatrix",
"importance matrix",
"conversational",
"custom_code",
"en",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:46:03+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #phi3 #text-generation #nlp #code #microsoft #phi #phi-3 #imatrix #importance matrix #conversational #custom_code #en #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #autotrain_compatible #endpoints_compatible #region-us
| Quant Infos
===========
Includes latest BPE tokenizer fixes
-----------------------------------
* Updated for the latest BPE pre-tokenizer fixes URL
* quants done with an importance matrix to reduce quantization loss
* K & IQ quants in basically all variants from Q6\_K down to IQ1\_S
* fixed end token for instruct mode (<|eot\_id|>[128009])
* Quantized with URL commit f4ab2a41476600a98067a9474ea8f9e6db41bcfa (master from 2024-04-29)
* Imatrix generated with this dataset.
Original Model Card
===================
Model Summary
-------------
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered, publicly available website data with a focus on high-quality, reasoning-dense properties.
The model belongs to the Phi-3 family (Mini version) in two variants, 4K and 128K, which denote the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
* Phi-3 Microsoft Blog
* Phi-3 Technical Report
* Phi-3 on Azure AI Studio
* Phi-3 GGUF: 4K
* Phi-3 ONNX: 4K
Intended Uses
-------------
Primary use cases
The model is intended for commercial and research use in English. It is suited for applications that require:
1. Memory/compute constrained environments
2. Latency bound scenarios
3. Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
Use case considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
How to Use
----------
Phi-3 Mini-4K-Instruct has been integrated into the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:
* When loading the model, ensure that 'trust\_remote\_code=True' is passed as an argument of the 'from\_pretrained()' function.
* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL'. The previous command is an alternative to cloning and installing from the source.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
Phi-3 Mini-4K-Instruct is also available in HuggingChat.
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
For example:
where the model generates the text after '<|assistant|>'. For a few-shot prompt, it can be formatted as follows:
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
Responsible AI Considerations
-----------------------------
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
Training
--------
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.
Benchmarks
----------
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
Software
--------
* PyTorch
* DeepSpeed
* Transformers
* Flash-Attention
Hardware
--------
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\_pretrained() with attn\_implementation="eager"
* CPU: use the GGUF quantized models 4K
* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K
Cross Platform Support
----------------------
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
License
-------
The model is licensed under the MIT license.
Trademarks
----------
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
| [
"### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #gguf #phi3 #text-generation #nlp #code #microsoft #phi #phi-3 #imatrix #importance matrix #conversational #custom_code #en #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [WesPro/PsykidelicLlama3](https://huggingface.co/WesPro/PsykidelicLlama3) + [mpasila/Llama-3-LimaRP-LoRA-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-LoRA-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: WesPro/PsykidelicLlama3+mpasila/Llama-3-LimaRP-LoRA-8B
parameters:
weight: 1.0
merge_method: linear
dtype: float16
```
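
Since the result is a standard `transformers` Llama checkpoint, it loads like any other causal LM. A minimal sketch, assuming only the usual AutoModel API (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "WesPro/PsyKidelic_Llama3_LimaRP"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # the merge was produced in float16 (see dtype above)
    device_map="auto",
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```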
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["WesPro/PsykidelicLlama3", "mpasila/Llama-3-LimaRP-LoRA-8B"]} | WesPro/PsyKidelic_Llama3_LimaRP | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:WesPro/PsykidelicLlama3",
"base_model:mpasila/Llama-3-LimaRP-LoRA-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T21:47:06+00:00 | [
"2203.05482"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2203.05482 #base_model-WesPro/PsykidelicLlama3 #base_model-mpasila/Llama-3-LimaRP-LoRA-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the linear merge method.
### Models Merged
The following models were included in the merge:
* WesPro/PsykidelicLlama3 + mpasila/Llama-3-LimaRP-LoRA-8B
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the linear merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* WesPro/PsykidelicLlama3 + mpasila/Llama-3-LimaRP-LoRA-8B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2203.05482 #base_model-WesPro/PsykidelicLlama3 #base_model-mpasila/Llama-3-LimaRP-LoRA-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the linear merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* WesPro/PsykidelicLlama3 + mpasila/Llama-3-LimaRP-LoRA-8B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
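
In the absence of repo-specific instructions, a minimal sketch follows, assuming only the standard `diffusers` text-to-image API implied by the repo tags (`StableDiffusionPipeline`). The prompt and sampler settings are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "rubbrband/asianBrmBeautyrealmix_v40",  # this repository
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Illustrative prompt; tune steps/guidance for your use case.
image = pipe(
    "a photorealistic portrait, soft studio lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("sample.png")
```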
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/asianBrmBeautyrealmix_v40 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-23T21:48:49+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Resharded
Resharded version of https://huggingface.co/tiiuae/falcon-7b-instruct for low RAM environments (e.g. Colab, Kaggle) in safetensors.
Tutorial: https://medium.com/@vilsonrodrigues/run-your-private-llm-falcon-7b-instruct-with-less-than-6gb-of-gpu-using-4-bit-quantization-ff1d4ffbabcc
---
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
⚠️ Falcon is now available as a core model in the `transformers` library! To use the in-library version, please install the latest version of `transformers` with `pip install git+https://github.com/huggingface/transformers.git`, then simply remove the `trust_remote_code=True` argument from `from_pretrained()`.
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
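
If 16GB is more than you have available, the 4-bit quantization route from the tutorial linked above brings the footprint down to roughly 6GB. A minimal sketch using `bitsandbytes` through `transformers`; the quantization settings shown are illustrative assumptions, not the tutorial's exact configuration:

```python
# pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tiiuae/falcon-7b-instruct"  # the resharded weights load the same way

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # places the quantized weights on the available GPU
)

inputs = tokenizer("Girafatron is obsessed with giraffes.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_k=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```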
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) (see the multiquery sketch after the table below);
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
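
To make the multiquery design concrete, below is a minimal sketch (not Falcon's actual implementation; rotary embeddings, biases, and the parallel block layout are omitted). Each head keeps its own query projection, while one key/value head of size `head_dim` is shared by all 71 heads, which shrinks the KV cache during inference:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiQueryAttention(nn.Module):
    """Toy multiquery attention: per-head queries, one shared key/value head."""

    def __init__(self, d_model: int = 4544, head_dim: int = 64):
        super().__init__()
        self.n_heads = d_model // head_dim  # 71 heads at Falcon-7B's sizes
        self.head_dim = head_dim
        self.q_proj = nn.Linear(d_model, d_model)        # one query per head
        self.kv_proj = nn.Linear(d_model, 2 * head_dim)  # single shared K and V
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv_proj(x).split(self.head_dim, dim=-1)
        # Broadcast the single K/V head across all query heads.
        k = k.unsqueeze(1).expand(b, self.n_heads, t, self.head_dim)
        v = v.unsqueeze(1).expand(b, self.n_heads, t, self.head_dim)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out_proj(out.transpose(1, 2).reshape(b, t, -1))

x = torch.randn(1, 8, 4544)
print(MultiQueryAttention()(x).shape)  # torch.Size([1, 8, 4544])
```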
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected] | {"language": ["en"], "license": "apache-2.0", "datasets": ["tiiuae/falcon-refinedweb"], "inference": true, "widget": [{"text": "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?", "example_title": "Abu Dhabi Trip"}, {"text": "What's the Everett interpretation of quantum mechanics?", "example_title": "Q/A: Quantum & Answers"}, {"text": "Give me a list of the top 10 dive sites you would recommend around the world.", "example_title": "Diving Top 10"}, {"text": "Can you tell me more about deep-water soloing?", "example_title": "Extreme sports"}, {"text": "Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?", "example_title": "Twitter Helper"}, {"text": "What are the responsabilities of a Chief Llama Officer?", "example_title": "Trendy Jobs"}]} | droneinspector/droneinspector | null | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T21:50:13+00:00 | [
"2205.14135",
"1911.02150",
"2005.14165",
"2104.09864",
"2306.01116"
] | [
"en"
] | TAGS
#transformers #safetensors #falcon #text-generation #custom_code #en #dataset-tiiuae/falcon-refinedweb #arxiv-2205.14135 #arxiv-1911.02150 #arxiv-2005.14165 #arxiv-2104.09864 #arxiv-2306.01116 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Resharded
=========
Resharded version of URL for low RAM environments (e.g. Colab, Kaggle) in safetensors.
Tutorial: URL
---
Falcon-7B-Instruct
==================
Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by TII based on Falcon-7B and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.
*Paper coming soon .*
To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blogpost from HF!
Why use Falcon-7B-Instruct?
---------------------------
* You are looking for a ready-to-use chat/instruct model based on Falcon-7B.
* Falcon-7B is a strong base model, outperforming comparable open-source models (e.g., MPT-7B, StableLM, RedPajama etc.), thanks to being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. See the OpenLLM Leaderboard.
* It features an architecture optimized for inference, with FlashAttention (Dao et al., 2022) and multiquery (Shazeer et al., 2019).
️ Falcon is now available as a core model in the 'transformers' library! To use the in-library version, please install the latest version of 'transformers' with 'pip install git+URL', then simply remove the 'trust\_remote\_code=True' argument from 'from\_pretrained()'.
This is an instruct model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Falcon-7B.
Looking for an even more powerful model? Falcon-40B-Instruct is Falcon-7B-Instruct's big brother!
Falcon LLMs require PyTorch 2.0 for use with 'transformers'!
For fast inference with Falcon, check out Text Generation Inference! Read more in this blogpost.
You will need at least 16GB of memory to swiftly run inference with Falcon-7B-Instruct.
Model Card for Falcon-7B-Instruct
=================================
Model Details
-------------
### Model Description
* Developed by: URL;
* Model type: Causal decoder-only;
* Language(s) (NLP): English and French;
* License: Apache 2.0;
* Finetuned from model: Falcon-7B.
### Model Source
* Paper: *coming soon*.
Uses
----
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Bias, Risks, and Limitations
----------------------------
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
How to Get Started with the Model
---------------------------------
Training Details
----------------
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
The data was tokenized with the Falcon-7B/40B tokenizer.
Evaluation
----------
*Paper coming soon.*
See the OpenLLM Leaderboard for early results.
Note that this model variant is not optimized for NLP benchmarks.
Technical Specifications
------------------------
For more information about pretraining, see Falcon-7B.
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:
* Positional embeddings: rotary (Su et al., 2021);
* Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
* Decoder-block: parallel attention/MLP with a single layer norm.
Hyperparameter: Layers, Value: 32, Comment:
Hyperparameter: 'd\_model', Value: 4544, Comment: Increased to compensate for multiquery
Hyperparameter: 'head\_dim', Value: 64, Comment: Reduced to optimise for FlashAttention
Hyperparameter: Vocabulary, Value: 65024, Comment:
Hyperparameter: Sequence length, Value: 2048, Comment:
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
*Paper coming soon*. In the meantime, you can use the following information to cite:
To learn more about the pretraining dataset, see the RefinedWeb paper.
License
-------
Falcon-7B-Instruct is made available under the Apache 2.0 license.
Contact
-------
falconllm@URL
| [
"### Model Description\n\n\n* Developed by: URL;\n* Model type: Causal decoder-only;\n* Language(s) (NLP): English and French;\n* License: Apache 2.0;\n* Finetuned from model: Falcon-7B.",
"### Model Source\n\n\n* Paper: *coming soon*.\n\n\nUses\n----",
"### Direct Use\n\n\nFalcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.",
"### Out-of-Scope Use\n\n\nProduction use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nFalcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.",
"### Recommendations\n\n\nWe recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nTraining Details\n----------------",
"### Training Data\n\n\nFalcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.\n\n\n\nThe data was tokenized with the Falcon-7B/40B tokenizer.\n\n\nEvaluation\n----------\n\n\n*Paper coming soon.*\n\n\nSee the OpenLLM Leaderboard for early results.\n\n\nNote that this model variant is not optimized for NLP benchmarks.\n\n\nTechnical Specifications\n------------------------\n\n\nFor more information about pretraining, see Falcon-7B.",
"### Model Architecture and Objective\n\n\nFalcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).\n\n\nThe architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:\n\n\n* Positionnal embeddings: rotary (Su et al., 2021);\n* Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);\n* Decoder-block: parallel attention/MLP with a single layer norm.\n\n\nHyperparameter: Layers, Value: 32, Comment: \nHyperparameter: 'd\\_model', Value: 4544, Comment: Increased to compensate for multiquery\nHyperparameter: 'head\\_dim', Value: 64, Comment: Reduced to optimise for FlashAttention\nHyperparameter: Vocabulary, Value: 65024, Comment: \nHyperparameter: Sequence length, Value: 2048, Comment:",
"### Compute Infrastructure",
"#### Hardware\n\n\nFalcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.",
"#### Software\n\n\nFalcon-7B-Instruct was trained a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)\n\n\n*Paper coming soon* . In the meanwhile, you can use the following information to cite:\n\n\nTo learn more about the pretraining dataset, see the RefinedWeb paper.\n\n\nLicense\n-------\n\n\nFalcon-7B-Instruct is made available under the Apache 2.0 license.\n\n\nContact\n-------\n\n\nfalconllm@URL"
] | [
"TAGS\n#transformers #safetensors #falcon #text-generation #custom_code #en #dataset-tiiuae/falcon-refinedweb #arxiv-2205.14135 #arxiv-1911.02150 #arxiv-2005.14165 #arxiv-2104.09864 #arxiv-2306.01116 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model Description\n\n\n* Developed by: URL;\n* Model type: Causal decoder-only;\n* Language(s) (NLP): English and French;\n* License: Apache 2.0;\n* Finetuned from model: Falcon-7B.",
"### Model Source\n\n\n* Paper: *coming soon*.\n\n\nUses\n----",
"### Direct Use\n\n\nFalcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.",
"### Out-of-Scope Use\n\n\nProduction use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nFalcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.",
"### Recommendations\n\n\nWe recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nTraining Details\n----------------",
"### Training Data\n\n\nFalcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.\n\n\n\nThe data was tokenized with the Falcon-7B/40B tokenizer.\n\n\nEvaluation\n----------\n\n\n*Paper coming soon.*\n\n\nSee the OpenLLM Leaderboard for early results.\n\n\nNote that this model variant is not optimized for NLP benchmarks.\n\n\nTechnical Specifications\n------------------------\n\n\nFor more information about pretraining, see Falcon-7B.",
"### Model Architecture and Objective\n\n\nFalcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).\n\n\nThe architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:\n\n\n* Positionnal embeddings: rotary (Su et al., 2021);\n* Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);\n* Decoder-block: parallel attention/MLP with a single layer norm.\n\n\nHyperparameter: Layers, Value: 32, Comment: \nHyperparameter: 'd\\_model', Value: 4544, Comment: Increased to compensate for multiquery\nHyperparameter: 'head\\_dim', Value: 64, Comment: Reduced to optimise for FlashAttention\nHyperparameter: Vocabulary, Value: 65024, Comment: \nHyperparameter: Sequence length, Value: 2048, Comment:",
"### Compute Infrastructure",
"#### Hardware\n\n\nFalcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.",
"#### Software\n\n\nFalcon-7B-Instruct was trained a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)\n\n\n*Paper coming soon* . In the meanwhile, you can use the following information to cite:\n\n\nTo learn more about the pretraining dataset, see the RefinedWeb paper.\n\n\nLicense\n-------\n\n\nFalcon-7B-Instruct is made available under the Apache 2.0 license.\n\n\nContact\n-------\n\n\nfalconllm@URL"
] |
text-generation | transformers | base model = beomi/Llama-3-Open-Ko-8B-Instruct-preview
Dataset = hansoldeco's own domain-specific dataset
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) | {} | sosoai/hansoldeco-beomi-Llama-3-Open-Ko-8B-Instruct-preview | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T21:52:36+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| base model = beomi/Llama-3-Open-Ko-8B-Instruct-preview
Dataset = hansoldeco's own domain-specific dataset
<img src="URL" alt="Built with Axolotl" width="200" height="32"/> | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.1272
- eval_runtime: 2.8602
- eval_samples_per_second: 69.924
- eval_steps_per_second: 8.741
- epoch: 3.9936
- step: 312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
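
For reference, these hyperparameters map onto a TRL + PEFT setup roughly as sketched below. This is not the original training script: the dataset formatting, sequence length, and LoRA rank/alpha are assumptions (the e-SNLI slice is inferred from the model name):

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumption: a 5k e-SNLI slice flattened to plain text.
dataset = load_dataset("esnli", split="train[:5000]")
dataset = dataset.map(
    lambda ex: {"text": f"premise: {ex['premise']}\nhypothesis: {ex['hypothesis']}\nlabel: {ex['label']}"}
)

args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,  # 2 x 32 = effective batch size of 64
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=0,
)

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)  # rank/alpha assumed

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,  # assumption
)
trainer.train()
```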
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep", "results": []}]} | mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-23T21:54:27+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.1272
- eval_runtime: 2.8602
- eval_samples_per_second: 69.924
- eval_steps_per_second: 8.741
- epoch: 3.9936
- step: 312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | [
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.1272\n- eval_runtime: 2.8602\n- eval_samples_per_second: 69.924\n- eval_steps_per_second: 8.741\n- epoch: 3.9936\n- step: 312",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_4ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.1272\n- eval_runtime: 2.8602\n- eval_samples_per_second: 69.924\n- eval_steps_per_second: 8.741\n- epoch: 3.9936\n- step: 312",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
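
No usage snippet was provided; the following is a minimal sketch, assuming only what the repo metadata states (a PEFT adapter trained on mistralai/Mistral-7B-Instruct-v0.2). The prompt and generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # from the repo metadata
adapter_id = "ahajahmed/Enlighten_Instruct"     # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```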
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | ahajahmed/Enlighten_Instruct | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-23T21:56:17+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [ResplendentAI/Kei_Llama3_8B](https://huggingface.co/ResplendentAI/Kei_Llama3_8B) as a base.
### Models Merged
The following models were included in the merge:
* [cgato/L3-TheSpice-8b-v0.1.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3)
* [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cgato/L3-TheSpice-8b-v0.1.3
- model: Sao10K/L3-Solana-8B-v1
- model: ResplendentAI/Kei_Llama3_8B
merge_method: model_stock
base_model: ResplendentAI/Kei_Llama3_8B
dtype: float16
```
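
To sanity-check the merged weights, a minimal loading sketch with Transformers; the repository id is taken from this card's metadata, and the generation settings are assumptions rather than recommendations:

```python
# Minimal sketch: load the merged model with Transformers.
# "jeiku/Average_Normie_l3_v0_8B" comes from this card's metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "jeiku/Average_Normie_l3_v0_8B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype="auto", device_map="auto"  # device_map="auto" needs `accelerate`
)

prompt = "Write a haiku about merged language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```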
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cgato/L3-TheSpice-8b-v0.1.3", "Sao10K/L3-Solana-8B-v1", "ResplendentAI/Kei_Llama3_8B"]} | jeiku/Average_Normie_l3_v0_8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:cgato/L3-TheSpice-8b-v0.1.3",
"base_model:Sao10K/L3-Solana-8B-v1",
"base_model:ResplendentAI/Kei_Llama3_8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T21:56:31+00:00 | [
"2403.19522"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2403.19522 #base_model-cgato/L3-TheSpice-8b-v0.1.3 #base_model-Sao10K/L3-Solana-8B-v1 #base_model-ResplendentAI/Kei_Llama3_8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method using ResplendentAI/Kei_Llama3_8B as a base.
### Models Merged
The following models were included in the merge:
* cgato/L3-TheSpice-8b-v0.1.3
* Sao10K/L3-Solana-8B-v1
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using ResplendentAI/Kei_Llama3_8B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* cgato/L3-TheSpice-8b-v0.1.3\n* Sao10K/L3-Solana-8B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2403.19522 #base_model-cgato/L3-TheSpice-8b-v0.1.3 #base_model-Sao10K/L3-Solana-8B-v1 #base_model-ResplendentAI/Kei_Llama3_8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using ResplendentAI/Kei_Llama3_8B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* cgato/L3-TheSpice-8b-v0.1.3\n* Sao10K/L3-Solana-8B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
The model is generated using SFT + DPO with Mistral-7B as the base model.
## Training Details
Mistral-7B was fine-tuned with SFT on golf data in ChatML format.
The fine-tuned model was then trained with the DPO algorithm on Intel/orca_dpo_pairs in ChatML format.
### Training Procedure
Both trainings were performed using PEFT.
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
SFT parameters:

```python
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=55,
save_strategy="no",
logging_steps=5,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=30,
fp16=True,
```

DPO parameters:

```python
beta=0.1,
loss_type='sigmoid',
max_prompt_length=512,
max_length=1024,
```
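
For orientation, a hedged sketch of how such an SFT-then-DPO pipeline is typically wired with TRL and PEFT. The golf dataset id, the LoRA settings, and the exact TRL keyword style are assumptions (TRL's API has shifted across versions), not the authors' script:

```python
# Hedged sketch of an SFT -> DPO pipeline with TRL + PEFT; not the authors' exact script.
# Assumes an older TRL keyword style (DPOTrainer kwargs instead of DPOConfig).
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import DPOTrainer, SFTTrainer

base = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed Mistral-7B variant
peft_config = LoraConfig(r=16, lora_alpha=16, task_type="CAUSAL_LM")

# Stage 1: supervised fine-tuning on the ChatML golf data ("my_org/golf_chatml" is hypothetical).
sft_trainer = SFTTrainer(
    model=base,
    train_dataset=load_dataset("my_org/golf_chatml", split="train"),
    dataset_text_field="text",  # assumes a rendered-ChatML "text" column
    peft_config=peft_config,
    args=TrainingArguments(
        output_dir="sft", per_device_train_batch_size=1, gradient_accumulation_steps=4,
        gradient_checkpointing=True, learning_rate=5e-5, lr_scheduler_type="cosine",
        max_steps=55, optim="paged_adamw_32bit", warmup_steps=30, fp16=True,
    ),
)
sft_trainer.train()

# Stage 2: preference tuning on Intel/orca_dpo_pairs; columns must first be mapped
# to prompt/chosen/rejected (mapping omitted here for brevity).
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    beta=0.1,
    loss_type="sigmoid",
    max_prompt_length=512,
    max_length=1024,
    train_dataset=load_dataset("Intel/orca_dpo_pairs", split="train"),
    tokenizer=sft_trainer.tokenizer,
    args=TrainingArguments(output_dir="dpo", per_device_train_batch_size=1),
)
dpo_trainer.train()
```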
## Model Card Contact
https://huggingface.co/berkouille | {"library_name": "transformers", "tags": []} | berkouille/assistant_DPO_92 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T21:56:51+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
The model is generated using SFT + DPO with Mistral-7B as the base model.
## Training Details
Mistral-7B was fine-tuned with SFT on golf data in ChatML format.
The fine-tuned model was then trained with the DPO algorithm on Intel/orca_dpo_pairs in ChatML format.
### Training Procedure
Both trainings were performed using PEFT.
#### Preprocessing [optional]
#### Training Hyperparameters
SFT parameters:
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=55,
save_strategy="no",
logging_steps=5,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=30,
fp16=True,
DPO parameters
beta=0.1,
loss_type= 'sigmoid' ,
max_prompt_length=512,
max_length=1024,
## Model Card Contact
URL | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\nModel is generated using STF + DPO on Mistral-7B as base model.",
"## Training Details\n\nMistral-7B was finetuned using SFT on a golf data that is in ChatML format.\n\nFine-tuned model was trained using DPO algorithm using Intel/orca_dpo_pairs in ChatML format.",
"### Training Procedure\n\nBoth trainings were performed using PEFT.",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\nSFT parameters:\n per_device_train_batch_size=1,\n gradient_accumulation_steps=4,\n gradient_checkpointing=True,\n learning_rate=5e-5, \n lr_scheduler_type=\"cosine\",\n max_steps=55,\n save_strategy=\"no\",\n logging_steps=5,\n output_dir=new_model,\n optim=\"paged_adamw_32bit\",\n warmup_steps=30, \n fp16=True,\n\nDPO parameters\n beta=0.1,\n loss_type= 'sigmoid' , \n max_prompt_length=512,\n max_length=1024,",
"## Model Card Contact\n\nURL"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\nModel is generated using STF + DPO on Mistral-7B as base model.",
"## Training Details\n\nMistral-7B was finetuned using SFT on a golf data that is in ChatML format.\n\nFine-tuned model was trained using DPO algorithm using Intel/orca_dpo_pairs in ChatML format.",
"### Training Procedure\n\nBoth trainings were performed using PEFT.",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\nSFT parameters:\n per_device_train_batch_size=1,\n gradient_accumulation_steps=4,\n gradient_checkpointing=True,\n learning_rate=5e-5, \n lr_scheduler_type=\"cosine\",\n max_steps=55,\n save_strategy=\"no\",\n logging_steps=5,\n output_dir=new_model,\n optim=\"paged_adamw_32bit\",\n warmup_steps=30, \n fp16=True,\n\nDPO parameters\n beta=0.1,\n loss_type= 'sigmoid' , \n max_prompt_length=512,\n max_length=1024,",
"## Model Card Contact\n\nURL"
] |
null | null |
[GGUF of https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored](https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored)

This model is based on Llama-3-8b-Instruct, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license.
| {"license": "other", "license_name": "license", "license_link": "https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored"} | Orenguteng/Llama-3-8B-Lexi-Uncensored-GGUF | null | [
"gguf",
"license:other",
"region:us"
] | null | 2024-04-23T21:57:52+00:00 | [] | [] | TAGS
#gguf #license-other #region-us
|
GGUF of URL
!image/png
This model is based on Llama-3-8b-Instruct, and is governed by META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license.
| [] | [
"TAGS\n#gguf #license-other #region-us \n"
] |
image-segmentation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4429
- Mean Iou: 0.0127
- Mean Accuracy: 0.0289
- Overall Accuracy: 0.2813
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.0012
- Accuracy Flat-sidewalk: 0.7342
- Accuracy Flat-crosswalk: 0.0
- Accuracy Flat-cyclinglane: 0.0
- Accuracy Flat-parkingdriveway: 0.0
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.0
- Accuracy Human-person: 0.0
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.0
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.0
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.0538
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.0
- Accuracy Construction-fenceguardrail: 0.0
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: 0.0
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.0
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.1770
- Accuracy Nature-terrain: 0.0
- Accuracy Sky: 0.0149
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.0
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.0012
- Iou Flat-sidewalk: 0.3016
- Iou Flat-crosswalk: 0.0
- Iou Flat-cyclinglane: 0.0
- Iou Flat-parkingdriveway: 0.0
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.0
- Iou Human-person: 0.0
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.0
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.0
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.0318
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.0
- Iou Construction-fenceguardrail: 0.0
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: 0.0
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.0
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.0859
- Iou Nature-terrain: 0.0
- Iou Sky: 0.0108
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.0
- Iou Void-unclear: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
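
For reference, a hedged sketch of the Trainer setup these hyperparameters imply; the dataset wiring and label count are inferred from this card, not the exact training script:

```python
# Hedged sketch of the fine-tuning setup these hyperparameters imply;
# dataset wiring and label count are inferred from this card, not the exact script.
from datasets import load_dataset
from transformers import (SegformerForSemanticSegmentation, SegformerImageProcessor,
                          Trainer, TrainingArguments)

ds = load_dataset("segments/sidewalk-semantic")  # dataset named in this card
processor = SegformerImageProcessor(do_reduce_labels=False)

def transform(batch):
    # The dataset exposes "pixel_values" (image) and "label" (segmentation map).
    enc = processor(batch["pixel_values"], batch["label"], return_tensors="pt")
    return {"pixel_values": enc["pixel_values"], "labels": enc["labels"]}

train = ds["train"].with_transform(transform)

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0", num_labels=35  # 34 classes + unlabeled, per the metrics above
)

args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk-2",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
Trainer(model=model, args=args, train_dataset=train).train()
```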
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 3.5256 | 0.2 | 10 | 3.5147 | 0.0071 | 0.0401 | 0.1017 | nan | 0.0000 | 0.2861 | 0.0000 | 0.0000 | 0.0402 | 0.0 | 0.0011 | 0.0017 | 0.0 | 0.0035 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0215 | 0.0236 | 0.0002 | 0.0010 | 0.0 | 0.0053 | 0.0162 | 0.0020 | 0.5432 | 0.0000 | 0.0815 | 0.0166 | 0.0172 | 0.0010 | 0.0000 | 0.0028 | 0.2620 | 0.0002 | 0.0060 | 0.0294 | 0.0 | 0.0000 | 0.1889 | 0.0000 | 0.0000 | 0.0173 | 0.0 | 0.0011 | 0.0005 | 0.0 | 0.0029 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0026 | 0.0001 | 0.0000 | 0.0010 | 0.0 | 0.0038 | 0.0064 | 0.0001 | 0.0000 | 0.0000 | 0.0077 | 0.0010 | 0.0000 | 0.0010 | 0.0000 | 0.0027 | 0.0051 | 0.0002 | 0.0049 | 0.0002 |
| 3.3115 | 0.4 | 20 | 3.4349 | 0.0090 | 0.0293 | 0.1597 | nan | 0.0001 | 0.4536 | 0.0 | 0.0 | 0.0642 | 0.0 | 0.0002 | 0.0009 | 0.0 | 0.0008 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0075 | 0.0213 | 0.0 | 0.0009 | 0.0 | 0.0059 | 0.0046 | 0.0 | 0.0782 | 0.0 | 0.1300 | 0.0102 | 0.0231 | 0.0046 | 0.0000 | 0.0016 | 0.1731 | 0.0 | 0.0044 | 0.0114 | nan | 0.0001 | 0.2507 | 0.0 | 0.0 | 0.0202 | 0.0 | 0.0002 | 0.0004 | 0.0 | 0.0008 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0021 | 0.0001 | 0.0 | 0.0009 | 0.0 | 0.0038 | 0.0030 | 0.0 | 0.0000 | 0.0 | 0.0080 | 0.0012 | 0.0000 | 0.0044 | 0.0000 | 0.0016 | 0.0050 | 0.0 | 0.0038 | 0.0002 |
| 2.8003 | 0.6 | 30 | 3.3730 | 0.0087 | 0.0281 | 0.1245 | nan | 0.0054 | 0.3314 | 0.0000 | 0.0000 | 0.1290 | 0.0 | 0.0003 | 0.0031 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0023 | 0.0092 | 0.0 | 0.0011 | 0.0 | 0.0233 | 0.0022 | 0.0 | 0.0 | 0.0 | 0.1795 | 0.0126 | 0.0010 | 0.0239 | 0.0002 | 0.0046 | 0.2127 | 0.0 | 0.0075 | 0.0060 | nan | 0.0052 | 0.2092 | 0.0000 | 0.0000 | 0.0244 | 0.0 | 0.0003 | 0.0009 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0011 | 0.0001 | 0.0 | 0.0011 | 0.0 | 0.0078 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0013 | 0.0000 | 0.0203 | 0.0002 | 0.0041 | 0.0050 | 0.0 | 0.0054 | 0.0004 |
| 3.2521 | 0.8 | 40 | 3.2736 | 0.0110 | 0.0292 | 0.1863 | nan | 0.0294 | 0.5083 | 0.0001 | 0.0001 | 0.1290 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0015 | 0.0016 | 0.0 | 0.0016 | 0.0 | 0.0127 | 0.0038 | 0.0 | 0.0 | 0.0000 | 0.1112 | 0.0033 | 0.0 | 0.0218 | 0.0002 | 0.0042 | 0.1371 | 0.0 | 0.0163 | 0.0076 | nan | 0.0243 | 0.2651 | 0.0001 | 0.0001 | 0.0253 | 0.0 | 0.0004 | 0.0005 | 0.0 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.0001 | 0.0 | 0.0016 | 0.0 | 0.0058 | 0.0028 | 0.0 | 0.0 | 0.0000 | 0.0080 | 0.0011 | 0.0 | 0.0185 | 0.0002 | 0.0037 | 0.0049 | 0.0 | 0.0083 | 0.0011 |
| 2.9043 | 1.0 | 50 | 3.2220 | 0.0132 | 0.0291 | 0.1739 | nan | 0.1252 | 0.3934 | 0.0003 | 0.0003 | 0.1066 | 0.0 | 0.0063 | 0.0008 | 0.0 | 0.0068 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.0075 | 0.0 | 0.0098 | 0.0044 | 0.0 | 0.0 | 0.0 | 0.0582 | 0.0006 | 0.0 | 0.1309 | 0.0006 | 0.0081 | 0.1208 | 0.0 | 0.0094 | 0.0001 | nan | 0.0664 | 0.2317 | 0.0003 | 0.0003 | 0.0241 | 0.0 | 0.0053 | 0.0004 | 0.0 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0068 | 0.0 | 0.0052 | 0.0033 | 0.0 | 0.0 | 0.0 | 0.0085 | 0.0004 | 0.0 | 0.0710 | 0.0006 | 0.0064 | 0.0046 | 0.0 | 0.0061 | 0.0000 |
| 2.8893 | 1.2 | 60 | 3.1323 | 0.0128 | 0.0301 | 0.1824 | nan | 0.1147 | 0.2779 | 0.0000 | 0.0002 | 0.0638 | 0.0 | 0.0002 | 0.0001 | 0.0 | 0.0066 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0111 | 0.0 | 0.0017 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0052 | 0.0000 | 0.0 | 0.4865 | 0.0005 | 0.0062 | 0.0445 | 0.0 | 0.0025 | 0.0 | nan | 0.0637 | 0.1900 | 0.0000 | 0.0002 | 0.0202 | 0.0 | 0.0002 | 0.0001 | 0.0 | 0.0052 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0091 | 0.0 | 0.0014 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0033 | 0.0000 | 0.0 | 0.1301 | 0.0005 | 0.0050 | 0.0041 | 0.0 | 0.0022 | 0.0 |
| 2.8221 | 1.4 | 70 | 3.0049 | 0.0138 | 0.0298 | 0.2481 | nan | 0.0664 | 0.5578 | 0.0 | 0.0001 | 0.0184 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0015 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0101 | 0.0 | 0.0002 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.3322 | 0.0001 | 0.0147 | 0.0097 | 0.0 | 0.0000 | 0.0 | nan | 0.0443 | 0.2727 | 0.0 | 0.0001 | 0.0109 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0014 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0084 | 0.0 | 0.0002 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.1168 | 0.0001 | 0.0102 | 0.0032 | 0.0 | 0.0000 | 0.0 |
| 2.7321 | 1.6 | 80 | 2.9281 | 0.0129 | 0.0300 | 0.2121 | nan | 0.1000 | 0.3599 | 0.0 | 0.0 | 0.0076 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0022 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0172 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5224 | 0.0 | 0.0077 | 0.0040 | 0.0 | 0.0 | 0.0 | nan | 0.0577 | 0.2179 | 0.0 | 0.0 | 0.0056 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0019 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0126 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.1335 | 0.0 | 0.0065 | 0.0028 | 0.0 | 0.0 | 0.0 |
| 2.7583 | 1.8 | 90 | 2.9182 | 0.0107 | 0.0303 | 0.1746 | nan | 0.1465 | 0.1641 | 0.0 | 0.0 | 0.0036 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6905 | 0.0 | 0.0102 | 0.0008 | 0.0 | 0.0 | 0.0 | nan | 0.0714 | 0.1297 | 0.0 | 0.0 | 0.0030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0095 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.1398 | 0.0 | 0.0084 | 0.0007 | 0.0 | 0.0 | 0.0 |
| 3.1177 | 2.0 | 100 | 2.9230 | 0.0138 | 0.0297 | 0.2272 | nan | 0.1294 | 0.4556 | 0.0 | 0.0000 | 0.0030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0299 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3671 | 0.0 | 0.0219 | 0.0004 | 0.0 | 0.0 | 0.0 | nan | 0.0662 | 0.2463 | 0.0 | 0.0000 | 0.0026 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0196 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1193 | 0.0 | 0.0149 | 0.0004 | 0.0 | 0.0 | 0.0 |
| 3.041 | 2.2 | 110 | 2.8124 | 0.0138 | 0.0291 | 0.2549 | nan | 0.1402 | 0.6049 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0363 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1992 | 0.0 | 0.0075 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0683 | 0.2797 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0234 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0921 | 0.0 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1549 | 2.4 | 120 | 2.7993 | 0.0132 | 0.0292 | 0.2105 | nan | 0.1463 | 0.3812 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0572 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4022 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0692 | 0.2227 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0301 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1223 | 0.0 | 0.0053 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.7506 | 2.6 | 130 | 2.7869 | 0.0136 | 0.0290 | 0.2153 | nan | 0.1198 | 0.4194 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0626 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3578 | 0.0 | 0.0272 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0628 | 0.2315 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0319 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1161 | 0.0 | 0.0191 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.8666 | 2.8 | 140 | 2.7030 | 0.0133 | 0.0288 | 0.2546 | nan | 0.0626 | 0.5989 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0378 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2736 | 0.0 | 0.0047 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0417 | 0.2753 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0239 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1059 | 0.0 | 0.0041 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3693 | 3.0 | 150 | 2.6758 | 0.0133 | 0.0289 | 0.2790 | nan | 0.0661 | 0.7211 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0304 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1548 | 0.0 | 0.0089 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0432 | 0.3002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0211 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0808 | 0.0 | 0.0071 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4211 | 3.2 | 160 | 2.6509 | 0.0122 | 0.0292 | 0.3118 | nan | 0.0340 | 0.8762 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0157 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0493 | 0.0 | 0.0169 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0270 | 0.3255 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0129 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0375 | 0.0 | 0.0117 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2934 | 3.4 | 170 | 2.5811 | 0.0109 | 0.0290 | 0.3268 | nan | 0.0162 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0104 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0148 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0145 | 0.3322 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0093 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0131 | 0.0 | 0.0019 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2474 | 3.6 | 180 | 2.6740 | 0.0122 | 0.0287 | 0.3000 | nan | 0.0201 | 0.8363 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0619 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0461 | 0.0 | 0.0089 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0170 | 0.3185 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0342 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0354 | 0.0 | 0.0072 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4543 | 3.8 | 190 | 2.5741 | 0.0115 | 0.0287 | 0.3111 | nan | 0.0113 | 0.8837 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0529 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0263 | 0.0 | 0.0014 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0103 | 0.3246 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0317 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0219 | 0.0 | 0.0014 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.6415 | 4.0 | 200 | 2.4955 | 0.0114 | 0.0287 | 0.3121 | nan | 0.0075 | 0.8862 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0495 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0328 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0070 | 0.3248 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0302 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0266 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3359 | 4.2 | 210 | 2.6535 | 0.0130 | 0.0280 | 0.2474 | nan | 0.0389 | 0.6235 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1633 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1048 | 0.0 | 0.0211 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0285 | 0.2807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0537 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0632 | 0.0 | 0.0142 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8133 | 4.4 | 220 | 2.6000 | 0.0133 | 0.0285 | 0.2609 | nan | 0.0401 | 0.6643 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1069 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1350 | 0.0 | 0.0210 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0291 | 0.2901 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0447 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0749 | 0.0 | 0.0147 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3126 | 4.6 | 230 | 2.6429 | 0.0126 | 0.0288 | 0.1857 | nan | 0.0332 | 0.3374 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2814 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2779 | 0.0 | 0.0480 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0255 | 0.2045 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0639 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1067 | 0.0 | 0.0288 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2695 | 4.8 | 240 | 2.5140 | 0.0128 | 0.0282 | 0.2399 | nan | 0.0217 | 0.5869 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2003 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1312 | 0.0 | 0.0183 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0177 | 0.2729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0574 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0738 | 0.0 | 0.0145 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0622 | 5.0 | 250 | 2.4634 | 0.0126 | 0.0283 | 0.2656 | nan | 0.0107 | 0.6862 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1332 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1278 | 0.0 | 0.0058 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0097 | 0.2918 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0491 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0736 | 0.0 | 0.0052 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1988 | 5.2 | 260 | 2.5162 | 0.0125 | 0.0282 | 0.2209 | nan | 0.0083 | 0.5152 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2606 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1382 | 0.0 | 0.0379 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0077 | 0.2553 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0621 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0760 | 0.0 | 0.0236 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4214 | 5.4 | 270 | 2.5880 | 0.0122 | 0.0284 | 0.1888 | nan | 0.0134 | 0.3772 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3344 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1887 | 0.0 | 0.0516 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0117 | 0.2176 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0671 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0892 | 0.0 | 0.0279 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2255 | 5.6 | 280 | 2.4963 | 0.0127 | 0.0287 | 0.2732 | nan | 0.0126 | 0.7299 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1341 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0689 | 0.0 | 0.0301 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0113 | 0.3024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0513 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0476 | 0.0 | 0.0182 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3459 | 5.8 | 290 | 2.5055 | 0.0131 | 0.0288 | 0.2638 | nan | 0.0133 | 0.6801 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1239 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1258 | 0.0 | 0.0347 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0118 | 0.2933 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0718 | 0.0 | 0.0198 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1034 | 6.0 | 300 | 2.4549 | 0.0125 | 0.0288 | 0.2873 | nan | 0.0048 | 0.7776 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0929 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0897 | 0.0 | 0.0143 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0046 | 0.3101 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0430 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0581 | 0.0 | 0.0107 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2193 | 6.2 | 310 | 2.4227 | 0.0126 | 0.0290 | 0.2879 | nan | 0.0013 | 0.7619 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1482 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0013 | 0.3070 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0379 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0810 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3808 | 6.4 | 320 | 2.4239 | 0.0124 | 0.0290 | 0.2926 | nan | 0.0006 | 0.7900 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0797 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1109 | 0.0 | 0.0031 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0006 | 0.3122 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0397 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0675 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1201 | 6.6 | 330 | 2.4546 | 0.0130 | 0.0292 | 0.2795 | nan | 0.0010 | 0.7295 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0903 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1522 | 0.0 | 0.0186 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0010 | 0.3036 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0817 | 0.0 | 0.0131 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1429 | 6.8 | 340 | 2.4390 | 0.0121 | 0.0292 | 0.3077 | nan | 0.0004 | 0.8612 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0502 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0618 | 0.0 | 0.0185 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0004 | 0.3245 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0314 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0446 | 0.0 | 0.0122 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3745 | 7.0 | 350 | 2.4814 | 0.0132 | 0.0292 | 0.2555 | nan | 0.0020 | 0.6392 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0911 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1816 | 0.0 | 0.0800 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0020 | 0.2865 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0881 | 0.0 | 0.0287 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1907 | 7.2 | 360 | 2.4901 | 0.0130 | 0.0290 | 0.2387 | nan | 0.0014 | 0.5526 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1063 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2669 | 0.0 | 0.0588 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0014 | 0.2661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0432 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1055 | 0.0 | 0.0274 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1116 | 7.4 | 370 | 2.4841 | 0.0130 | 0.0290 | 0.2350 | nan | 0.0015 | 0.5323 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0908 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2968 | 0.0 | 0.0659 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0014 | 0.2612 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0397 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1097 | 0.0 | 0.0284 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4808 | 7.6 | 380 | 2.4890 | 0.0129 | 0.0293 | 0.2376 | nan | 0.0025 | 0.5715 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0758 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2136 | 0.0 | 0.1314 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0024 | 0.2729 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0372 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0958 | 0.0 | 0.0319 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8601 | 7.8 | 390 | 2.5003 | 0.0128 | 0.0290 | 0.2250 | nan | 0.0022 | 0.4998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0898 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2944 | 0.0 | 0.1015 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0022 | 0.2538 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1094 | 0.0 | 0.0313 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.032 | 8.0 | 400 | 2.5240 | 0.0125 | 0.0289 | 0.2093 | nan | 0.0027 | 0.4406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1033 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3108 | 0.0 | 0.1262 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0026 | 0.2379 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1106 | 0.0 | 0.0326 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9364 | 8.2 | 410 | 2.4666 | 0.0127 | 0.0292 | 0.2720 | nan | 0.0024 | 0.7293 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0924 | 0.0 | 0.1028 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0023 | 0.3046 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0371 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0590 | 0.0 | 0.0282 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0335 | 8.4 | 420 | 2.4894 | 0.0129 | 0.0292 | 0.2402 | nan | 0.0046 | 0.5965 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0783 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1650 | 0.0 | 0.1478 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0044 | 0.2787 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0400 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0831 | 0.0 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0622 | 8.6 | 430 | 2.5457 | 0.0121 | 0.0287 | 0.1888 | nan | 0.0038 | 0.3645 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1536 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3132 | 0.0 | 0.1396 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0037 | 0.2129 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0500 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1108 | 0.0 | 0.0338 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9635 | 8.8 | 440 | 2.5416 | 0.0120 | 0.0287 | 0.1908 | nan | 0.0028 | 0.4200 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1216 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1900 | 0.0 | 0.2427 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0027 | 0.2335 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0465 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0899 | 0.0 | 0.0340 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9328 | 9.0 | 450 | 2.4707 | 0.0128 | 0.0293 | 0.2609 | nan | 0.0024 | 0.6792 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0528 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1358 | 0.0 | 0.1274 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0024 | 0.2958 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0327 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0750 | 0.0 | 0.0299 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0373 | 9.2 | 460 | 2.5003 | 0.0128 | 0.0292 | 0.2294 | nan | 0.0028 | 0.5341 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0638 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2482 | 0.0 | 0.1447 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0028 | 0.2641 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0349 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1011 | 0.0 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2552 | 9.4 | 470 | 2.4884 | 0.0130 | 0.0292 | 0.2400 | nan | 0.0020 | 0.5674 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0689 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2513 | 0.0 | 0.1038 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0020 | 0.2712 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1016 | 0.0 | 0.0296 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.956 | 9.6 | 480 | 2.5214 | 0.0126 | 0.0289 | 0.2153 | nan | 0.0034 | 0.4825 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1038 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2477 | 0.0 | 0.1458 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0033 | 0.2501 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0445 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1003 | 0.0 | 0.0320 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1743 | 9.8 | 490 | 2.4624 | 0.0127 | 0.0289 | 0.2689 | nan | 0.0018 | 0.7146 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0769 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1041 | 0.0 | 0.0848 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0018 | 0.3001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0402 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0633 | 0.0 | 0.0261 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0282 | 10.0 | 500 | 2.4429 | 0.0127 | 0.0289 | 0.2813 | nan | 0.0012 | 0.7342 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0538 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1770 | 0.0 | 0.0149 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0012 | 0.3016 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0318 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0859 | 0.0 | 0.0108 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "other", "tags": ["vision", "image-segmentation", "generated_from_trainer"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "segformer-b0-finetuned-segments-sidewalk-2", "results": []}]} | karthik540/segformer-b0-finetuned-segments-sidewalk-2 | null | [
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T21:58:04+00:00 | [] | [] | TAGS
#transformers #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us
| segformer-b0-finetuned-segments-sidewalk-2
==========================================
This model is a fine-tuned version of nvidia/mit-b0 on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4429
* Mean Iou: 0.0127
* Mean Accuracy: 0.0289
* Overall Accuracy: 0.2813
* Accuracy Unlabeled: nan
* Accuracy Flat-road: 0.0012
* Accuracy Flat-sidewalk: 0.7342
* Accuracy Flat-crosswalk: 0.0
* Accuracy Flat-cyclinglane: 0.0
* Accuracy Flat-parkingdriveway: 0.0
* Accuracy Flat-railtrack: 0.0
* Accuracy Flat-curb: 0.0
* Accuracy Human-person: 0.0
* Accuracy Human-rider: 0.0
* Accuracy Vehicle-car: 0.0
* Accuracy Vehicle-truck: 0.0
* Accuracy Vehicle-bus: 0.0
* Accuracy Vehicle-tramtrain: 0.0
* Accuracy Vehicle-motorcycle: 0.0
* Accuracy Vehicle-bicycle: 0.0
* Accuracy Vehicle-caravan: 0.0
* Accuracy Vehicle-cartrailer: 0.0
* Accuracy Construction-building: 0.0538
* Accuracy Construction-door: 0.0
* Accuracy Construction-wall: 0.0
* Accuracy Construction-fenceguardrail: 0.0
* Accuracy Construction-bridge: 0.0
* Accuracy Construction-tunnel: 0.0
* Accuracy Construction-stairs: 0.0
* Accuracy Object-pole: 0.0
* Accuracy Object-trafficsign: 0.0
* Accuracy Object-trafficlight: 0.0
* Accuracy Nature-vegetation: 0.1770
* Accuracy Nature-terrain: 0.0
* Accuracy Sky: 0.0149
* Accuracy Void-ground: 0.0
* Accuracy Void-dynamic: 0.0
* Accuracy Void-static: 0.0
* Accuracy Void-unclear: 0.0
* Iou Unlabeled: nan
* Iou Flat-road: 0.0012
* Iou Flat-sidewalk: 0.3016
* Iou Flat-crosswalk: 0.0
* Iou Flat-cyclinglane: 0.0
* Iou Flat-parkingdriveway: 0.0
* Iou Flat-railtrack: 0.0
* Iou Flat-curb: 0.0
* Iou Human-person: 0.0
* Iou Human-rider: 0.0
* Iou Vehicle-car: 0.0
* Iou Vehicle-truck: 0.0
* Iou Vehicle-bus: 0.0
* Iou Vehicle-tramtrain: 0.0
* Iou Vehicle-motorcycle: 0.0
* Iou Vehicle-bicycle: 0.0
* Iou Vehicle-caravan: 0.0
* Iou Vehicle-cartrailer: 0.0
* Iou Construction-building: 0.0318
* Iou Construction-door: 0.0
* Iou Construction-wall: 0.0
* Iou Construction-fenceguardrail: 0.0
* Iou Construction-bridge: 0.0
* Iou Construction-tunnel: 0.0
* Iou Construction-stairs: 0.0
* Iou Object-pole: 0.0
* Iou Object-trafficsign: 0.0
* Iou Object-trafficlight: 0.0
* Iou Nature-vegetation: 0.0859
* Iou Nature-terrain: 0.0
* Iou Sky: 0.0108
* Iou Void-ground: 0.0
* Iou Void-dynamic: 0.0
* Iou Void-static: 0.0
* Iou Void-unclear: 0.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 6e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | null |
## Usage
Package installation
```
pip install llama-cpp-python "huggingface_hub[cli]"
```
Download the model:
```
huggingface-cli download sourabhdattawad/meta-llama-3-8b-instruct-gguf meta-llama-3-8b-instruct.Q8_0.gguf --local-dir . --local-dir-use-symlinks False
```
```Python
from llama_cpp import Llama
llm = Llama(
model_path="meta-llama-3-8b-instruct.Q8_0.gguf",
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# seed=1337, # Uncomment to set a specific seed
# n_ctx=2048, # Uncomment to increase the context window
)
output = llm(
"Q: Name the planets in the solar system? A: ", # Prompt
max_tokens=50, # Generate up to 50 tokens, set to None to generate up to the end of the context window
stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
echo=True # Echo the prompt back in the output
)
output
```
```
Llama.generate: prefix-match hit
llama_print_timings: load time = 7770.49 ms
llama_print_timings: sample time = 100.16 ms / 40 runs ( 2.50 ms per token, 399.35 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 35214.73 ms / 40 runs ( 880.37 ms per token, 1.14 tokens per second)
llama_print_timings: total time = 35895.91 ms / 41 tokens
{'id': 'cmpl-01e2feb3-c0ff-4a6e-8ca4-b8bf2172da01',
'object': 'text_completion',
'created': 1713912080,
'model': 'meta-llama-3-8b-instruct.Q8_0.gguf',
'choices': [{'text': 'Q: Name the planets in the solar system? A: 1. Mercury, 2. Venus, 3. Earth, 4. Mars, 5. Jupiter, 6. Saturn, 7. Uranus, 8. Neptune.',
'index': 0,
'logprobs': None,
'finish_reason': 'stop'}],
'usage': {'prompt_tokens': 13, 'completion_tokens': 40, 'total_tokens': 53}}
```
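
Since the underlying model is instruction-tuned, the chat-completion API in llama-cpp-python is often a better fit than raw completion; a small sketch reusing the `llm` instance created above (the chat template is read from the GGUF metadata, so formatting can vary by build):

```python
# Chat-style usage sketch; `llm` is the Llama instance created above.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Name the planets in the solar system."},
    ],
    max_tokens=100,
)
print(response["choices"][0]["message"]["content"])
```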
## Google Colab
[https://colab.research.google.com/drive/1vhrCKGzY7KP5mScHNUl7hjmbPsUyj_sj?usp=sharing](https://colab.research.google.com/drive/1vhrCKGzY7KP5mScHNUl7hjmbPsUyj_sj?usp=sharing) | {"language": ["en"], "tags": ["meta", "pytorch", "llama", "llama-3", "llama-cpp", "quantized", "8-bit", "GGUF", "8 Billion", "python", "instruct", "google-colab"], "model_name": "meta-llama-3-8B-instruct-gguf", "pipeline_tag": "text-generation", "inference": false, "model_creator": "sourabhdattawad", "quantized_by": "sourabhdattawad", "license_name": "llama3"} | sourabhdattawad/meta-llama-3-8b-instruct-gguf | null | [
"gguf",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"quantized",
"8-bit",
"GGUF",
"8 Billion",
"python",
"instruct",
"google-colab",
"text-generation",
"en",
"region:us"
] | null | 2024-04-23T21:59:32+00:00 | [] | [
"en"
] | TAGS
#gguf #meta #pytorch #llama #llama-3 #llama-cpp #quantized #8-bit #GGUF #8 Billion #python #instruct #google-colab #text-generation #en #region-us
|
## Usage
Package installation
Download the model:
## Google Colab
URL | [
"## Usage\n\nPackage installation\n\n\n\nDownload the model:",
"## Google Colab\n\nURL"
] | [
"TAGS\n#gguf #meta #pytorch #llama #llama-3 #llama-cpp #quantized #8-bit #GGUF #8 Billion #python #instruct #google-colab #text-generation #en #region-us \n",
"## Usage\n\nPackage installation\n\n\n\nDownload the model:",
"## Google Colab\n\nURL"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
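
A minimal loading sketch, assuming this repository holds a PEFT adapter for the `Universal-NER/UniNER-7B-type` base named in the metadata; the prompt only approximates UniNER's entity-typing style:

```python
# Minimal sketch: load the adapter on top of its base model via PEFT.
# "jc80622/unilora_test" comes from this card's metadata; everything else is an assumption.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "jc80622/unilora_test"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Universal-NER/UniNER-7B-type")

prompt = "Text: Barack Obama visited Paris in 2015.\nWhat describes person in the text?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```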
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "Universal-NER/UniNER-7B-type"} | jc80622/unilora_test | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Universal-NER/UniNER-7B-type",
"region:us"
] | null | 2024-04-23T22:03:23+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-Universal-NER/UniNER-7B-type #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-Universal-NER/UniNER-7B-type #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.1177
- eval_runtime: 2.8618
- eval_samples_per_second: 69.887
- eval_steps_per_second: 8.736
- epoch: 4.992
- step: 390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep", "results": []}]} | mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-23T22:03:42+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.1177
- eval_runtime: 2.8618
- eval_samples_per_second: 69.887
- eval_steps_per_second: 8.736
- epoch: 4.992
- step: 390
## Model description
More information needed
## Intended uses & limitations
More information needed
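A hedged sketch for loading this LoRA adapter on top of its base model (dtype and device placement are assumptions; the base repo is gated and may require authentication):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto")
# Attach the fine-tuned LoRA weights on top of the base model
model = PeftModel.from_pretrained(
    base, "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep")
```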
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | [
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.1177\n- eval_runtime: 2.8618\n- eval_samples_per_second: 69.887\n- eval_steps_per_second: 8.736\n- epoch: 4.992\n- step: 390",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr1e-5_5ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.1177\n- eval_runtime: 2.8618\n- eval_samples_per_second: 69.887\n- eval_steps_per_second: 8.736\n- epoch: 4.992\n- step: 390",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] |
null | null |
# T3qm7xpPercival_01-7B
T3qm7xpPercival_01-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: nlpguy/T3QM7XP
- model: AurelPx/Percival_01-7b-slerp
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/T3qm7xpPercival_01-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/T3qm7xpPercival_01-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-23T22:03:44+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# T3qm7xpPercival_01-7B
T3qm7xpPercival_01-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# T3qm7xpPercival_01-7B\n\nT3qm7xpPercival_01-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# T3qm7xpPercival_01-7B\n\nT3qm7xpPercival_01-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
text-generation | transformers | # Average Normie v1

A model by an average normie for the average normie.
This model is a stock merge of the following models:
https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3
https://huggingface.co/Sao10K/L3-Solana-8B-v1
https://huggingface.co/ResplendentAI/Kei_Llama3_8B
The final merge then had the following LoRA applied over it:
https://huggingface.co/ResplendentAI/Theory_of_Mind_Llama3
This should be an intelligent and adept roleplaying model. | {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["grimulkan/theory-of-mind"], "base_model": ["jeiku/Average_Normie_l3_v0_8B", "ResplendentAI/Theory_of_Mind_Llama3"]} | jeiku/Average_Normie_l3_v1_8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:grimulkan/theory-of-mind",
"base_model:jeiku/Average_Normie_l3_v0_8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T22:04:24+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #dataset-grimulkan/theory-of-mind #base_model-jeiku/Average_Normie_l3_v0_8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Average Normie v1
!image/png
A model by an average normie for the average normie.
This model is a stock merge of the following models:
URL
URL
URL
The final merge then had the following LoRA applied over it:
URL
This should be an intelligent and adept roleplaying model. | [
"# Average Normie v1\n\n!image/png\n\nA model by an average normie for the average normie. \n\nThis model is a stock merge of the following models:\n\nURL\n\nURL\n\nURL\n\nThe final merge then had the following LoRA applied over it:\n\nURL\n\nThis should be an intelligent and adept roleplaying model."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #dataset-grimulkan/theory-of-mind #base_model-jeiku/Average_Normie_l3_v0_8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Average Normie v1\n\n!image/png\n\nA model by an average normie for the average normie. \n\nThis model is a stock merge of the following models:\n\nURL\n\nURL\n\nURL\n\nThe final merge then had the following LoRA applied over it:\n\nURL\n\nThis should be an intelligent and adept roleplaying model."
] |
text-generation | transformers |
# Model Card for Model ID
Multilingual fine-tuned version of LLAMA-3-8B quantized to 4 bits.
## Model Details
### Model Description
Multilingual fine-tuned version of LLAMA-3-8B, quantized to 4 bits, fine-tuned on common open-source datasets and showing improvements on multilingual tasks.

The standard 4-bit quantization technique was applied after fine-tuning, reducing the time and space complexity required to run the model. The overall architecture is entirely LLAMA-3 based.
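A hedged loading sketch (assumes the repo stores a bitsandbytes 4-bit quantization config, in which case transformers applies it automatically):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "comidan/llama-3-chat-multilingual-v1-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit weights load via the quantization config saved with the checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```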
- **Developed by:** Daniele Comi
- **Model type:** LLAMA-3-8B
- **Language(s) (NLP):** Multilingual
- **License:** MIT
- **Finetuned from model:** LLAMA-3-8B | {"language": ["it", "en"], "license": "mit", "library_name": "transformers"} | comidan/llama-3-chat-multilingual-v1-8b | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"it",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T22:06:48+00:00 | [] | [
"it",
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #it #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
Multilingual fine-tuned version of LLAMA-3-8B quantized to 4 bits.
## Model Details
### Model Description
Multilingual fine-tuned version of LLAMA-3-8B, quantized to 4 bits, fine-tuned on common open-source datasets and showing improvements on multilingual tasks.

The standard 4-bit quantization technique was applied after fine-tuning, reducing the time and space complexity required to run the model. The overall architecture is entirely LLAMA-3 based.
- Developed by: Daniele Comi
- Model type: LLAMA-3-8B
- Language(s) (NLP): Multilingual
- License: MIT
- Finetuned from model: LLAMA-3-8B | [
"# Model Card for Model ID\n\nMultilingual fine tuned version of LLAMA-3-8B quantized in 4 bits.",
"## Model Details",
"### Model Description\n\nMultilingual fine tuned version of LLAMA-3-8B quantized in 4 bits using common open source datasets and showing improvements over multilingual tasks.\nIt has been used the standard bitquantized technique for post-fine-tuning quantization reducing the computational time complexity and space complexity required to run the model. The overall architecture it's all LLAMA-3 based.\n\n\n- Developed by: Daniele Comi\n- Model type: LLAMA-3-8B\n- Language(s) (NLP): Multilingual\n- License: MIT\n- Finetuned from model: LLAMA-3-8B"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #it #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID\n\nMultilingual fine tuned version of LLAMA-3-8B quantized in 4 bits.",
"## Model Details",
"### Model Description\n\nMultilingual fine tuned version of LLAMA-3-8B quantized in 4 bits using common open source datasets and showing improvements over multilingual tasks.\nIt has been used the standard bitquantized technique for post-fine-tuning quantization reducing the computational time complexity and space complexity required to run the model. The overall architecture it's all LLAMA-3 based.\n\n\n- Developed by: Daniele Comi\n- Model type: LLAMA-3-8B\n- Language(s) (NLP): Multilingual\n- License: MIT\n- Finetuned from model: LLAMA-3-8B"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** Mbetyko
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
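A hedged inference sketch with Unsloth's loader (the sequence length and 4-bit flag are assumptions):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Mbetyko/basket",  # this fine-tune
    max_seq_length=2048,          # assumed context budget
    load_in_4bit=True,            # matches the 4-bit base it was tuned from
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```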
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | Mbetyko/basket | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:07:20+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #llama #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Mbetyko
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Mbetyko\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #llama #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Mbetyko\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_1_1_robeczech-base
This model is a fine-tuned version of [ufal/robeczech-base](https://huggingface.co/ufal/robeczech-base) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3233
- Precision: 0.8580
- Recall: 0.8857
- F1: 0.8716
- Accuracy: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
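A hedged usage sketch for Czech NER with this checkpoint (the example sentence and aggregation strategy are illustrative assumptions):

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="stulcrad/CNEC_1_1_robeczech-base",
               aggregation_strategy="simple")
print(ner("Václav Havel se narodil v Praze."))
```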
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3724 | 3.41 | 2000 | 0.3332 | 0.7990 | 0.8230 | 0.8108 | 0.9376 |
| 0.1863 | 6.81 | 4000 | 0.2656 | 0.8515 | 0.8636 | 0.8575 | 0.9455 |
| 0.1109 | 10.22 | 6000 | 0.2575 | 0.8505 | 0.8737 | 0.8619 | 0.9493 |
| 0.068 | 13.63 | 8000 | 0.2804 | 0.8567 | 0.8790 | 0.8677 | 0.9503 |
| 0.0466 | 17.04 | 10000 | 0.2952 | 0.8573 | 0.8830 | 0.8699 | 0.9498 |
| 0.0305 | 20.44 | 12000 | 0.2992 | 0.8618 | 0.8865 | 0.8740 | 0.9520 |
| 0.0231 | 23.85 | 14000 | 0.3272 | 0.8567 | 0.8843 | 0.8703 | 0.9512 |
| 0.02 | 27.26 | 16000 | 0.3233 | 0.8580 | 0.8857 | 0.8716 | 0.9511 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "ufal/robeczech-base", "model-index": [{"name": "CNEC_1_1_robeczech-base", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8579982891360137, "name": "Precision"}, {"type": "recall", "value": 0.8856512141280353, "name": "Recall"}, {"type": "f1", "value": 0.8716054746904193, "name": "F1"}, {"type": "accuracy", "value": 0.9511284046692607, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_1_1_robeczech-base | null | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:ufal/robeczech-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:08:33+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #token-classification #generated_from_trainer #dataset-cnec #base_model-ufal/robeczech-base #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| CNEC\_1\_1\_robeczech-base
==========================
This model is a fine-tuned version of ufal/robeczech-base on the cnec dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3233
* Precision: 0.8580
* Recall: 0.8857
* F1: 0.8716
* Accuracy: 0.9511
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #safetensors #roberta #token-classification #generated_from_trainer #dataset-cnec #base_model-ufal/robeczech-base #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint from the Hub (the archive filename is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint (filename assumed) and load the trained agent
checkpoint = load_from_hub("volverine/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "261.99 +/- 15.60", "name": "mean_reward", "verified": false}]}]}]} | volverine/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-23T22:09:19+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | null | Fine-tuned model for generating research papers with Mistral 7B v0.1. Fine-tuned on arXiv documents collected by scraping with the help of the arXiv API.
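For illustration only, collecting papers with the arXiv API might look like this hedged sketch using the `arxiv` client package (the query and filters are assumptions, not the author's actual pipeline):

```python
import arxiv  # pip install arxiv

# Hypothetical query; the categories actually scraped are not documented
search = arxiv.Search(query="cat:cs.CL", max_results=100,
                      sort_by=arxiv.SortCriterion.SubmittedDate)
for paper in arxiv.Client().results(search):
    print(paper.title)
```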
Will add a longer description later on. | {"license": "apache-2.0"} | dpetrou00/mistral-paper-generator | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-04-23T22:10:07+00:00 | [] | [] | TAGS
#safetensors #license-apache-2.0 #region-us
| Fine-tuned model for generating research papers with Mistral 7B v0.1. Fine-tuned on arXiv documents collected by scraping with the help of the arXiv API.
Will add a longer description later on. | [] | [
"TAGS\n#safetensors #license-apache-2.0 #region-us \n"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_2_0_robeczech-base
This model is a fine-tuned version of [ufal/robeczech-base](https://huggingface.co/ufal/robeczech-base) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3306
- Precision: 0.8531
- Recall: 0.8848
- F1: 0.8687
- Accuracy: 0.9545
## Model description
More information needed
## Intended uses & limitations
More information needed
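A hedged usage sketch (the example sentence and aggregation strategy are assumptions):

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="stulcrad/CNEC_2_0_robeczech-base",
               aggregation_strategy="simple")
print(ner("Univerzita Karlova sídlí v Praze."))
```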
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4499 | 2.22 | 2000 | 0.3871 | 0.7163 | 0.7099 | 0.7131 | 0.9222 |
| 0.2342 | 4.44 | 4000 | 0.2576 | 0.8149 | 0.8251 | 0.8200 | 0.9451 |
| 0.1449 | 6.67 | 6000 | 0.2407 | 0.8231 | 0.8523 | 0.8375 | 0.9492 |
| 0.1027 | 8.89 | 8000 | 0.2267 | 0.8362 | 0.8748 | 0.8551 | 0.9527 |
| 0.0751 | 11.11 | 10000 | 0.2429 | 0.8394 | 0.8712 | 0.8550 | 0.9522 |
| 0.0473 | 13.33 | 12000 | 0.2633 | 0.8439 | 0.8720 | 0.8577 | 0.9535 |
| 0.0369 | 15.56 | 14000 | 0.2821 | 0.8468 | 0.8755 | 0.8609 | 0.9541 |
| 0.0286 | 17.78 | 16000 | 0.2797 | 0.8534 | 0.8827 | 0.8678 | 0.9558 |
| 0.0234 | 20.0 | 18000 | 0.2860 | 0.8550 | 0.8834 | 0.8690 | 0.9558 |
| 0.0168 | 22.22 | 20000 | 0.3146 | 0.8471 | 0.8795 | 0.8630 | 0.9531 |
| 0.0142 | 24.44 | 22000 | 0.3165 | 0.8488 | 0.8816 | 0.8649 | 0.9530 |
| 0.011 | 26.67 | 24000 | 0.3291 | 0.8518 | 0.8816 | 0.8664 | 0.9537 |
| 0.0092 | 28.89 | 26000 | 0.3306 | 0.8531 | 0.8848 | 0.8687 | 0.9545 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "ufal/robeczech-base", "model-index": [{"name": "CNEC_2_0_robeczech-base", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.853103448275862, "name": "Precision"}, {"type": "recall", "value": 0.8848354792560801, "name": "Recall"}, {"type": "f1", "value": 0.8686797752808989, "name": "F1"}, {"type": "accuracy", "value": 0.954457738324971, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_2_0_robeczech-base | null | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:ufal/robeczech-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:10:33+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #token-classification #generated_from_trainer #dataset-cnec #base_model-ufal/robeczech-base #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| CNEC\_2\_0\_robeczech-base
==========================
This model is a fine-tuned version of ufal/robeczech-base on the cnec dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3306
* Precision: 0.8531
* Recall: 0.8848
* F1: 0.8687
* Accuracy: 0.9545
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #safetensors #roberta #token-classification #generated_from_trainer #dataset-cnec #base_model-ufal/robeczech-base #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | FranchRamp/bert-finetuned-ner4 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:12:26+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #tensorboard #safetensors #bert #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # meta-llama/Meta-Llama-3-8B AWQ
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Meta-Llama-3-8B-AWQ"
system_message = "You are Meta-Llama-3-8B, incarnated as a powerful AI. You were created by meta-llama."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Meta-Llama-3-8B-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T22:13:30+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # meta-llama/Meta-Llama-3-8B AWQ
- Model creator: meta-llama
- Original model: Meta-Llama-3-8B
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# meta-llama/Meta-Llama-3-8B AWQ\n\n- Model creator: meta-llama\n- Original model: Meta-Llama-3-8B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# meta-llama/Meta-Llama-3-8B AWQ\n\n- Model creator: meta-llama\n- Original model: Meta-Llama-3-8B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds-ner
This model is a fine-tuned version of [manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds](https://huggingface.co/manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3648
- Precision: 0.8205
- Recall: 0.8927
- F1: 0.8551
- Accuracy: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
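A hedged usage sketch for Spanish clinical NER (the example sentence and aggregation strategy are illustrative assumptions):

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds-ner",
               aggregation_strategy="simple")
print(ner("Ecografía abdominal: hígado de tamaño y ecoestructura normales."))
```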
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 206 | 0.2948 | 0.7527 | 0.8411 | 0.7945 | 0.9132 |
| No log | 2.0 | 412 | 0.2572 | 0.7746 | 0.8522 | 0.8116 | 0.9235 |
| 0.4194 | 3.0 | 618 | 0.2866 | 0.7759 | 0.8482 | 0.8104 | 0.9215 |
| 0.4194 | 4.0 | 824 | 0.2813 | 0.7878 | 0.8866 | 0.8343 | 0.9235 |
| 0.0971 | 5.0 | 1030 | 0.2902 | 0.7969 | 0.8856 | 0.8389 | 0.9249 |
| 0.0971 | 6.0 | 1236 | 0.3229 | 0.8055 | 0.8846 | 0.8432 | 0.9239 |
| 0.0971 | 7.0 | 1442 | 0.3422 | 0.8028 | 0.8775 | 0.8385 | 0.9208 |
| 0.0459 | 8.0 | 1648 | 0.3215 | 0.8297 | 0.8877 | 0.8577 | 0.9253 |
| 0.0459 | 9.0 | 1854 | 0.3568 | 0.8119 | 0.8866 | 0.8476 | 0.9235 |
| 0.0285 | 10.0 | 2060 | 0.3520 | 0.8145 | 0.8887 | 0.8500 | 0.9235 |
| 0.0285 | 11.0 | 2266 | 0.3597 | 0.8255 | 0.8907 | 0.8569 | 0.9264 |
| 0.0285 | 12.0 | 2472 | 0.3599 | 0.8183 | 0.8887 | 0.8520 | 0.9266 |
| 0.0203 | 13.0 | 2678 | 0.3612 | 0.8195 | 0.8917 | 0.8541 | 0.9246 |
| 0.0203 | 14.0 | 2884 | 0.3649 | 0.8180 | 0.8917 | 0.8533 | 0.9258 |
| 0.0164 | 15.0 | 3090 | 0.3648 | 0.8205 | 0.8927 | 0.8551 | 0.9264 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds", "model-index": [{"name": "finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds-ner", "results": []}]} | manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds-ner | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:14:38+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #base_model-manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds #autotrain_compatible #endpoints_compatible #region-us
| finetuned\_\_roberta-base-biomedical-clinical-es\_\_augmented-ultrasounds-ner
=============================================================================
This model is a fine-tuned version of manucos/finetuned\_\_roberta-base-biomedical-clinical-es\_\_augmented-ultrasounds on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3648
* Precision: 0.8205
* Recall: 0.8927
* F1: 0.8551
* Accuracy: 0.9264
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #base_model-manucos/finetuned__roberta-base-biomedical-clinical-es__augmented-ultrasounds #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
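Until the card is completed, a plausible minimal sketch (the pipeline task follows the repo tags; the sample input and its labels are assumptions):

```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard text-classification pipeline.
clf = pipeline(
    "text-classification",
    model="annavtkn/rubert_sentiment_classification_model",
)

print(clf("Отличный сервис, всем рекомендую!"))  # "Great service, highly recommend!"
```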
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | annavtkn/rubert_sentiment_classification_model | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:16:48+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
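Until the card is completed, a hedged sketch based on the repo's `StableDiffusionPipeline` tag (prompt and dtype choices are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: standard Stable Diffusion checkpoint, per the diffusers tags.
pipe = StableDiffusionPipeline.from_pretrained(
    "rubbrband/asianBrmBeautyrealmix_v10",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("portrait photo, soft natural light", num_inference_steps=30).images[0]
image.save("sample.png")
```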
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/asianBrmBeautyrealmix_v10 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-23T22:18:53+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["zh", "en"], "license": "llama3"} | LeeZande/Egg1 | null | [
"zh",
"en",
"arxiv:1910.09700",
"license:llama3",
"region:us"
] | null | 2024-04-23T22:19:10+00:00 | [
"1910.09700"
] | [
"zh",
"en"
] | TAGS
#zh #en #arxiv-1910.09700 #license-llama3 #region-us
|
# Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#zh #en #arxiv-1910.09700 #license-llama3 #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7binstruct_summarize
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3475
## Model description
More information needed
## Intended uses & limitations
More information needed
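A minimal inference-side sketch (not part of the original card): the adapter is loaded on top of its base model with PEFT.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned adapter.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "prasannab2001/mistral7binstruct_summarize")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```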
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5569 | 0.2119 | 25 | 0.4059 |
| 0.362 | 0.4237 | 50 | 0.3475 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral7binstruct_summarize", "results": []}]} | prasannab2001/mistral7binstruct_summarize | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-23T22:24:53+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
| mistral7binstruct\_summarize
============================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3475
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: constant
* lr\_scheduler\_warmup\_steps: 0.03
* training\_steps: 50
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
OpenVINO IR with int4 quantization.
To use on LocalAI use the following model definition:
```
name: phi3
backend: transformers
parameters:
model: fakezeta/Phi-3-mini-128k-instruct-ov-int4
context_size: 131072
threads: 6
trust_remote_code: true
type: OVModelForCausalLM
template:
use_tokenizer_template: true
stopwords:
- <|end|>
```
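Outside LocalAI, the same int4 IR can presumably be loaded directly with optimum-intel (a sketch, assuming a recent `optimum[openvino]` install):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "fakezeta/Phi-3-mini-128k-instruct-ov-int4"
model = OVModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("<|user|>\nWhat is OpenVINO?<|end|>\n<|assistant|>", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```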
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, with the Mini version in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) that each can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follow:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
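Equivalently, a small sketch (not from the original card) of building this format with the tokenizer's chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
# add_generation_prompt=True appends the trailing <|assistant|> tag.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```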
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
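# Note: with do_sample=False below, decoding is greedy and the temperature
# value is effectively ignored.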
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models; in particular, we did not optimize the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"` (see the sketch after this list)
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
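For the V100-or-earlier case above, a minimal sketch of the eager-attention load (illustration only, not from the original card):

```python
from transformers import AutoModelForCausalLM

# Disable flash attention for pre-Ampere GPUs such as the V100.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    attn_implementation="eager",
    trust_remote_code=True,
)
```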
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies. | {"license": "mit"} | fakezeta/Phi-3-mini-128k-instruct-ov-int4 | null | [
"transformers",
"openvino",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:29:31+00:00 | [] | [] | TAGS
#transformers #openvino #phi3 #text-generation #conversational #custom_code #license-mit #autotrain_compatible #endpoints_compatible #region-us
| OpenVINO IR with int4 quantization.
To use on LocalAI use the following model definition:
Model Summary
-------------
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, with the Mini version in two variants, 4K and 128K, which is the context length (in tokens) that each can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
* Phi-3 Microsoft Blog
* Phi-3 Technical Report
* Phi-3 on Azure AI Studio
* Phi-3 GGUF: 4K
* Phi-3 ONNX: 4K
Intended Uses
-------------
Primary use cases
The model is intended for commercial and research use in English. It is useful for applications that require:
1. Memory/compute constrained environments
2. Latency bound scenarios
3. Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
Use case considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
How to Use
----------
Phi-3 Mini-4K-Instruct has been integrated into the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:
* When loading the model, ensure that 'trust\_remote\_code=True' is passed as an argument of the 'from\_pretrained()' function.
* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL'. The previous command is an alternative to cloning and installing from the source.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
Phi-3 Mini-4K-Instruct is also available in HuggingChat.
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
For example:
where the model generates the text after '<|assistant|>'. In the case of a few-shot prompt, the prompt can be formatted as follows:
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
Responsible AI Considerations
-----------------------------
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
Training
--------
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided here.
Benchmarks
----------
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
Software
--------
* PyTorch
* DeepSpeed
* Transformers
* Flash-Attention
Hardware
--------
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\_pretrained() with attn\_implementation="eager"
* CPU: use the GGUF quantized models 4K
* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K
Cross Platform Support
----------------------
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
License
-------
The model is licensed under the MIT license.
Trademarks
----------
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| [
"### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #openvino #phi3 #text-generation #conversational #custom_code #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | El-chapoo/Llama_GQA-100m | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T22:30:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | dayoon/e5_new_loss_epoch1_from_mel | null | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:33:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
OpenVINO IR with int8 quantization.
To use on LocalAI use the following model definition:
```
name: phi3
backend: transformers
parameters:
model: fakezeta/Phi-3-mini-128k-instruct-ov-int8
context_size: 131072
threads: 6
trust_remote_code: true
type: OVModelForCausalLM
template:
use_tokenizer_template: true
stopwords:
- <|end|>
```
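Outside LocalAI, the IR can also be loaded directly with optimum-intel. A minimal sketch, assuming `optimum[openvino]` is installed:

```python
# Sketch: run this OpenVINO IR with optimum-intel.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "fakezeta/Phi-3-mini-128k-instruct-ov-int8"
model = OVModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("<|user|>\nWhat is OpenVINO?<|end|>\n<|assistant|>", max_new_tokens=64)[0]["generated_text"])
```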
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered, publicly available website data with a focus on high quality and reasoning-dense properties.
The model belongs to the Phi-3 family, with the Mini version in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is useful for applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated into the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
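The same prompt can also be built programmatically from a message list with the tokenizer's chat template, so no manual token placement is needed (a short sketch):

```python
# Sketch: render the Phi-3 chat format via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
# add_generation_prompt=True appends the trailing <|assistant|> marker.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```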
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
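# Load the instruct model on GPU; trust_remote_code pulls Phi-3's custom model code from the Hub.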
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
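# Chat history as role/content dicts; the pipeline renders it with the model's chat template.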
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
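# temperature 0.0 with do_sample=False gives deterministic, greedy decoding.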
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
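As a complement to that script, the block below is a minimal single-GPU SFT sketch with TRL. The dataset name is a placeholder, the hyperparameters are illustrative rather than the recipe used for this model, and the keyword names follow the TRL API current at the time of writing, so they may differ in newer releases:
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# "my_chat_corpus" is a placeholder: any dataset with a "text" column already
# rendered in the <|user|>/<|assistant|> chat format will work here.
dataset = load_dataset("my_chat_corpus", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        output_dir="phi3-mini-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
)
trainer.train()
```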
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per-benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" (a sketch follows this list)
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
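Putting the first option into code, a minimal sketch for V100-class GPUs; it matches the sample inference code except for the attention implementation and dtype (fp16 is chosen because V100 lacks bfloat16 support):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# V100 GPUs lack the hardware support flash attention requires,
# so fall back to the eager attention implementation.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    device_map="cuda",
    torch_dtype=torch.float16,
    trust_remote_code=True,
    attn_implementation="eager",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
```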
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added (a usage sketch follows the list):
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
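To run one of these ONNX builds, the `onnxruntime-genai` package exposes a small generation loop. The sketch below follows that package's early examples, so the API may differ across versions, and the model directory is a placeholder:
```python
import onnxruntime_genai as og

# Placeholder path: a downloaded Phi-3 mini ONNX build for your target device
model = og.Model("phi3-mini-4k-instruct-onnx")
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

prompt = "<|user|>\nI am going to Paris, what should I see?<|end|>\n<|assistant|>\n"
params = og.GeneratorParams(model)
params.set_search_options(max_length=500)
params.input_ids = tokenizer.encode(prompt)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    # Stream each new token as it is produced
    print(tokenizer_stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```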
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies. | {"license": "mit"} | fakezeta/Phi-3-mini-128k-instruct-ov-int8 | null | [
"transformers",
"openvino",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:34:36+00:00 | [] | [] | TAGS
#transformers #openvino #phi3 #text-generation #conversational #custom_code #license-mit #autotrain_compatible #endpoints_compatible #region-us
| OpenVINO IR with int8 quantization.
To use on LocalAI use the following model definition:
Model Summary
-------------
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
* Phi-3 Microsoft Blog
* Phi-3 Technical Report
* Phi-3 on Azure AI Studio
* Phi-3 GGUF: 4K
* Phi-3 ONNX: 4K
Intended Uses
-------------
Primary use cases
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1. Memory/compute constrained environments
2. Latency bound scenarios
3. Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
Use case considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
How to Use
----------
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:
* When loading the model, ensure that 'trust\_remote\_code=True' is passed as an argument of the 'from\_pretrained()' function.
* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
Phi-3 Mini-4K-Instruct is also available in HuggingChat.
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
For example:
where the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
Responsible AI Considerations
-----------------------------
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
Training
--------
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.
Benchmarks
----------
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per-benchmark.
Software
--------
* PyTorch
* DeepSpeed
* Transformers
* Flash-Attention
Hardware
--------
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\_pretrained() with attn\_implementation="eager"
* CPU: use the GGUF quantized models 4K
* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K
Cross Platform Support
----------------------
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
License
-------
The model is licensed under the MIT license.
Trademarks
----------
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #openvino #phi3 #text-generation #conversational #custom_code #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | akankshya107/llava_dpt_1 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:35:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading example (the checkpoint filename follows the usual SB3 Hub naming convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed from the standard SB3 checkpoint naming convention
checkpoint = load_from_hub("cmattoon/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
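Once loaded, the policy can be checked against the score reported above; a short evaluation sketch (the episode count is arbitrary):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```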
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "266.77 +/- 19.25", "name": "mean_reward", "verified": false}]}]}]} | cmattoon/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-23T22:36:41+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | EpicJhon/llama_255 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T22:38:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# jeiku/Average_Normie_l3_v1_8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`jeiku/Average_Normie_l3_v1_8B`](https://huggingface.co/jeiku/Average_Normie_l3_v1_8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jeiku/Average_Normie_l3_v1_8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo jeiku/Average_Normie_l3_v1_8B-Q4_K_M-GGUF --model average_normie_l3_v1_8b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo jeiku/Average_Normie_l3_v1_8B-Q4_K_M-GGUF --model average_normie_l3_v1_8b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m average_normie_l3_v1_8b.Q4_K_M.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["grimulkan/theory-of-mind"], "base_model": ["jeiku/Average_Normie_l3_v0_8B", "ResplendentAI/Theory_of_Mind_Llama3"]} | jeiku/Average_Normie_l3_v1_8B-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:grimulkan/theory-of-mind",
"base_model:jeiku/Average_Normie_l3_v0_8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:39:54+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #en #dataset-grimulkan/theory-of-mind #base_model-jeiku/Average_Normie_l3_v0_8B #license-apache-2.0 #endpoints_compatible #region-us
|
# jeiku/Average_Normie_l3_v1_8B-Q4_K_M-GGUF
This model was converted to GGUF format from 'jeiku/Average_Normie_l3_v1_8B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# jeiku/Average_Normie_l3_v1_8B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'jeiku/Average_Normie_l3_v1_8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #en #dataset-grimulkan/theory-of-mind #base_model-jeiku/Average_Normie_l3_v0_8B #license-apache-2.0 #endpoints_compatible #region-us \n",
"# jeiku/Average_Normie_l3_v1_8B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'jeiku/Average_Normie_l3_v1_8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# Llama 3 8B Instruct – OmniQuant
Based on [Llama 3 8B Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct).
Quantized with [OmniQuant](https://github.com/OpenGVLab/OmniQuant).
## Evaluation
### PPL (↓)
| | wiki |
| --------- | ---- |
| FP | 8.29 |
| **Quantized** | 8.97 |
### Accuracy on English Benchmarks, % (↑)
| | piqa | arc_easy | arc_challenge | boolq | hellaswag | winogrande | mmlu_humanities | mmlu_social_sciences | mmlu_stem | mmlu_other |
| --------- | ---- | -------- | ------------- | ----- | --------- | ---------- | --------------- | -------------------- | --------- | ---------- |
| FP | 78.7 | 81.6 | 53.0 | 83.1 | 57.7 | 72.1 | 67.0 | 70.9 | 54.5 | 68.2 |
| **Quantized** | 77.2 | 80.7 | 51.8 | 82.8 | 56.8 | 72.5 | 63.4 | 67.6 | 50.1 | 65.0 |
### Accuracy on Russian Benchmarks, % (↑)
| | danetqa | terra | rwsd | muserc | rucos | lidirus | parus | rcb | russe | rucola |
| --------- | ------- | ----- | ---- | ------ | ----- | ------- | ----- | ---- | ----- | ------ |
| FP | 78.6 | 60.9 | 65.7 | 56.1 | 64.9 | 63.2 | 71.0 | 34.1 | 60.8 | 64.1 |
| **Quantized** | 71.6 | 60.6 | 52.5 | 63.7 | 57.3 | 57.2 | 74.0 | 33.6 | 36.9 | 67.5 |
### Summary
| | Avg acc diff on Eng, % (↑) | Avg acc diff on Rus, % (↑) | Occupied disk space, % (↓) |
| --------- | -------------------------- | -------------------------- | -------------------------- |
| FP | 0 | 0 | 100 |
| **Quantized** | -1.9 | -4.5 | 35.7 |
## Examples
### Imports and Model Loading
<details>
<summary>Expand</summary>
```python
import gc
import auto_gptq.nn_modules.qlinear.qlinear_cuda as qlinear_cuda
import auto_gptq.nn_modules.qlinear.qlinear_triton as qlinear_triton
import torch
from accelerate import (
init_empty_weights,
infer_auto_device_map,
load_checkpoint_in_model,
)
from tqdm import tqdm
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
pipeline,
)
def get_named_linears(model):
return {
name: module for name, module in model.named_modules()
if isinstance(module, torch.nn.Linear)
}
def set_module(model, name, module):
parent = model
levels = name.split('.')
for i in range(len(levels) - 1):
cur_name = levels[i]
if cur_name.isdigit():
parent = parent[int(cur_name)]
else:
parent = getattr(parent, cur_name)
setattr(parent, levels[-1], module)
def load_model(model_path):
# Based on: https://github.com/OpenGVLab/OmniQuant/blob/main/runing_quantized_mixtral_7bx8.ipynb
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
if not hasattr(config, 'quantization_config'):
raise AttributeError(
f'No quantization info found in model config "{model_path}"'
f' (`quantization_config` section is missing).'
)
wbits = config.quantization_config['bits']
group_size = config.quantization_config['group_size']
# We are going to init an ordinary model and then manually replace all Linears with QuantLinears
del config.quantization_config
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config=config, torch_dtype=torch.float16, trust_remote_code=True)
layers = model.model.layers
for i in tqdm(range(len(layers))):
layer = layers[i]
named_linears = get_named_linears(layer)
for name, module in named_linears.items():
params = (
wbits, group_size,
module.in_features, module.out_features,
module.bias is not None
)
if wbits in [2, 4]:
q_linear = qlinear_triton.QuantLinear(*params)
elif wbits == 3:
q_linear = qlinear_cuda.QuantLinear(*params)
else:
raise NotImplementedError("Only 2, 3 and 4 bits are supported.")
q_linear.to(next(layer.parameters()).device)
set_module(layer, name, q_linear)
torch.cuda.empty_cache()
gc.collect()
model.tie_weights()
device_map = infer_auto_device_map(model)
print("Loading pre-computed quantized weights...")
load_checkpoint_in_model(
model, checkpoint=model_path,
device_map=device_map, offload_state_dict=True,
)
print("Model loaded successfully!")
return model
```
</details>
### Inference
```python
model_path = "compressa-ai/Llama-3-8B-Instruct-OmniQuant"
model = load_model(model_path).cuda()
tokenizer = AutoTokenizer.from_pretrained(
model_path, use_fast=False, trust_remote_code=True
)
# Llama 3 "specifics"
# https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/discussions/4
terminators = [
tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
system_message = "You are a friendly chatbot who always responds in the style of a pirate."
user_message = "Where are we going, Captain?"
messages = [
{"role": "system", "content": system_message},
{"role": "user", "content": user_message},
]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.cuda() for k, v in inputs.items()}
outputs = model.generate(
**inputs, max_new_tokens=512,
do_sample=True, temperature=0.7, top_p=0.95,
eos_token_id=terminators,
)
response = tokenizer.decode(outputs[0])
continuation = response.removeprefix(prompt).removesuffix(tokenizer.eos_token)
print(f'Prompt:\n{prompt}')
print(f'Continuation:\n{continuation}\n')
```
### Inference Using Pipeline
```python
pipe = pipeline(
"text-generation",
model=model, tokenizer=tokenizer,
eos_token_id=terminators,
max_new_tokens=512, do_sample=True,
temperature=0.7, top_p=0.95,
device=0,
)
prompt = pipe.tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt)
response = outputs[0]["generated_text"]
continuation = response.removeprefix(prompt)
print(f'Prompt:\n{prompt}')
print(f'Continuation:\n{continuation}\n')
```
| {"license": "other", "tags": ["llama3", "omniquant", "gptq", "triton"], "base_model": "NousResearch/Meta-Llama-3-8B-Instruct", "model_type": "llama", "pipeline_tag": "text-generation", "quantized_by": "Compressa", "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license"} | compressa-ai/Llama-3-8B-Instruct-OmniQuant | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama3",
"omniquant",
"gptq",
"triton",
"conversational",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-23T22:40:25+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #llama3 #omniquant #gptq #triton #conversational #base_model-NousResearch/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Llama 3 8B Instruct – OmniQuant
===============================
Based on Llama 3 8B Instruct.
Quantized with OmniQuant.
Evaluation
----------
### PPL (↓)
### Accuracy on English Benchmarks, % (↑)
### Accuracy on Russian Benchmarks, % (↑)
### Summary
Examples
--------
### Imports and Model Loading
Expand
### Inference
### Inference Using Pipeline
| [
"### PPL (↓)",
"### Accuracy on English Benchmarks, % (↑)",
"### Accuracy on Russian Benchmarks, % (↑)",
"### Summary\n\n\n\nExamples\n--------",
"### Imports and Model Loading\n\n\n\nExpand",
"### Inference",
"### Inference Using Pipeline"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #llama3 #omniquant #gptq #triton #conversational #base_model-NousResearch/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### PPL (↓)",
"### Accuracy on English Benchmarks, % (↑)",
"### Accuracy on Russian Benchmarks, % (↑)",
"### Summary\n\n\n\nExamples\n--------",
"### Imports and Model Loading\n\n\n\nExpand",
"### Inference",
"### Inference Using Pipeline"
] |
text-generation | transformers | # llama-3-slerp-kraut-dragon-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
* [nbeerbower/llama-3-dragonmaid-8B](https://huggingface.co/nbeerbower/llama-3-dragonmaid-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
layer_range: [0, 32]
- model: nbeerbower/llama-3-dragonmaid-8B
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/llama-3-dragonmaid-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
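To reproduce a merge like this, save the YAML above to a file and run mergekit's CLI over it. A minimal sketch follows; the config filename and output path are placeholders:

```bash
pip install mergekit
# Reads the SLERP config above and writes the merged weights to ./merged-model.
mergekit-yaml config.yaml ./merged-model --cuda
```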
| {"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct", "nbeerbower/llama-3-dragonmaid-8B"], "license_name": "llama3"} | nbeerbower/llama-3-slerp-kraut-dragon-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"base_model:nbeerbower/llama-3-dragonmaid-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T22:46:13+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct #base_model-nbeerbower/llama-3-dragonmaid-8B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # llama-3-slerp-kraut-dragon-8B
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
* nbeerbower/llama-3-dragonmaid-8B
### Configuration
The following YAML configuration was used to produce this model:
| [
"# llama-3-slerp-kraut-dragon-8B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct\n* nbeerbower/llama-3-dragonmaid-8B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct #base_model-nbeerbower/llama-3-dragonmaid-8B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# llama-3-slerp-kraut-dragon-8B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct\n* nbeerbower/llama-3-dragonmaid-8B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
**I messed up on this model** and forgot to add special tokens for <|im_start|> & <|im_end|> - but it still seems to work fine. If you're having any issues, let me know in the Discussions tab and I can retrain it.
<div style="background-color: rgb(255, 241, 209); color: rgb(90, 65, 0); border-radius: 8px; padding: 0.5rem 1rem; ">
<b>This model is usable, but you can get better results with the fixed latest model!</b> Check out the <a href="https://huggingface.co/mrfakename/refusal" style="color: rgb(90, 65, 0);">latest model</a> instead of using this one!
</div>
---
A tiny 1B model that refuses basically anything you ask it! Trained on the [refusal](https://huggingface.co/datasets/mrfakename/refusal) dataset. Prompt format is ChatML.
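A minimal inference sketch is shown below; the prompt text and sampling settings are illustrative, and the ChatML template is built by hand since (as noted above) <|im_start|>/<|im_end|> were not registered as special tokens:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrfakename/refusal-old"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hand-built ChatML prompt (the special tokens are plain strings here).
prompt = "<|im_start|>user\nHow do I bake a cake?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```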
Training results:
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4352 | 0.0580 | 1 | 2.4462 |
| 1.5742 | 0.5217 | 9 | 1.4303 |
| 1.5084 | 1.0435 | 18 | 1.3672 |
| 1.0814 | 1.5217 | 27 | 1.3483 |
| 1.1024 | 2.0435 | 36 | 1.3204 |
| 0.6554 | 2.5217 | 45 | 1.4286 |
| 0.6163 | 3.0435 | 54 | 1.4375 |
| 0.5058 | 3.5072 | 63 | 1.4908 |
Training hyperparameters:
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
Base model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T | {"language": ["en"], "library_name": "transformers", "datasets": ["mrfakename/refusal"], "pipeline_tag": "text-generation", "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"} | mrfakename/refusal-old | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:mrfakename/refusal",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T22:46:41+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #conversational #en #dataset-mrfakename/refusal #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| I messed up on this model and forgot to add special tokens for <|im\_start|> & <|im\_end|> - but it still seems to work fine. If you're having any issues, let me know in the Discussions tab and I can retrain it.
**This model is usable, but you can get better results with the fixed latest model!** Check out the latest model (URL) instead of using this one!
---
A tiny 1B model that refuses basically anything you ask it! Trained on the refusal dataset. Prompt format is ChatML.
Training results:
Training hyperparameters:
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 10
* num\_epochs: 4
Base model: URL
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #conversational #en #dataset-mrfakename/refusal #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null | An attempt to make ParasiticRogue's model a tad better on longer contexts. I just ran the script; all credit for the original merge goes to my friend.
Exl2 quants already being uploaded by him:
https://huggingface.co/ParasiticRogue/RP-Stew-v2.5-34B-exl2-4.65
Also, here are my sampler, instruct, and story-string presets for the model (they work wonderfully):
Samplers: https://files.catbox.moe/1yhmve.json
Instruct: https://files.catbox.moe/mp4s7h.json
Story String: https://files.catbox.moe/98xlsb.json
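For reference, the merged weights load like any other transformers causal LM. A minimal sketch follows; a 34B model needs multiple GPUs or offloading, and `device_map="auto"` assumes accelerate is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MarinaraSpaghetti/RP-Stew-v2.5-34B"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Spread the ~34B parameters across available GPUs/CPU automatically.
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)
```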

```
models:
- model: F:\Merge\ParasiticRogue_Nontoxic-PiVoT-Bagel-RP-34b
parameters:
weight: 0.16
density: 0.42
- model: F:\Merge\ParasiticRogue_Nyakura-CausalLM-RP-34B
parameters:
weight: 0.22
density: 0.54
- model: F:\Merge\migtissera_Tess-34B-v1.5b
parameters:
weight: 0.28
density: 0.66
- model: F:\Merge\brucethemoose_Capybara-Fixed-Temp
parameters:
weight: 0.34
density: 0.78
merge_method: dare_ties
base_model: F:\Merge\chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
``` | {} | MarinaraSpaghetti/RP-Stew-v2.5-34B | null | [
"safetensors",
"region:us"
] | null | 2024-04-23T22:50:35+00:00 | [] | [] | TAGS
#safetensors #region-us
| An attempt to make ParasiticRogue's model a tad better on longer contexts. I just ran the script; all credit for the original merge goes to my friend.
Exl2 quants already being uploaded by him:
URL
Also, here are my sampler, instruct, and story-string presets for the model (they work wonderfully):
Samplers: URL
Instruct: URL
Story String: URL
!image/png
| [] | [
"TAGS\n#safetensors #region-us \n"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image-captioning-output
This model is a fine-tuned version of [nlpconnect/vit-gpt2-image-captioning](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5164
- Rouge1: 35.5267
- Rouge2: 12.254
- Rougel: 32.968
- Rougelsum: 32.9723
- Gen Len: 12.395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5193 | 0.25 | 500 | 0.5171 | 33.0319 | 10.364 | 30.6939 | 30.6888 | 12.1 |
| 0.4842 | 0.5 | 1000 | 0.5102 | 33.7318 | 10.8199 | 31.1842 | 31.18 | 11.3 |
| 0.4724 | 0.75 | 1500 | 0.5028 | 34.6981 | 11.4074 | 31.9128 | 31.9158 | 12.02 |
| 0.4632 | 1.0 | 2000 | 0.5012 | 35.9443 | 12.8742 | 33.4061 | 33.377 | 11.04 |
| 0.377 | 1.25 | 2500 | 0.5026 | 35.7745 | 12.2309 | 33.3234 | 33.3353 | 11.735 |
| 0.3819 | 1.5 | 3000 | 0.5018 | 36.0145 | 13.0296 | 33.5985 | 33.6182 | 12.285 |
| 0.3788 | 1.75 | 3500 | 0.5030 | 35.9016 | 12.5276 | 33.4995 | 33.5033 | 11.305 |
| 0.3654 | 2.0 | 4000 | 0.5020 | 36.2476 | 12.945 | 33.6453 | 33.6595 | 11.9 |
| 0.3102 | 2.25 | 4500 | 0.5146 | 36.1507 | 13.0072 | 33.3889 | 33.3786 | 12.305 |
| 0.3137 | 2.5 | 5000 | 0.5166 | 35.7413 | 12.5693 | 33.2646 | 33.2508 | 12.71 |
| 0.3111 | 2.75 | 5500 | 0.5171 | 35.5658 | 12.511 | 33.0581 | 33.0518 | 12.55 |
| 0.3023 | 3.0 | 6000 | 0.5164 | 35.5267 | 12.254 | 32.968 | 32.9723 | 12.395 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
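The card above doesn't include a usage snippet, so here is a minimal sketch using the standard image-to-text pipeline; the image path is a placeholder:

```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="NourFakih/image-captioning-output")
# Accepts a local path, URL, or PIL.Image; "example.jpg" is illustrative.
print(captioner("example.jpg", max_new_tokens=20))
```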
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "nlpconnect/vit-gpt2-image-captioning", "model-index": [{"name": "image-captioning-output", "results": []}]} | NourFakih/image-captioning-output | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"base_model:nlpconnect/vit-gpt2-image-captioning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:52:24+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #base_model-nlpconnect/vit-gpt2-image-captioning #license-apache-2.0 #endpoints_compatible #region-us
| image-captioning-output
=======================
This model is a fine-tuned version of nlpconnect/vit-gpt2-image-captioning on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5164
* Rouge1: 35.5267
* Rouge2: 12.254
* Rougel: 32.968
* Rougelsum: 32.9723
* Gen Len: 12.395
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #base_model-nlpconnect/vit-gpt2-image-captioning #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
# BERT-based Organization Detection Model for Turkish Texts
## Model Description
This model is fine-tuned on the `dbmdz/bert-base-turkish-uncased` architecture for detecting organization accounts within Turkish Twitter. This initiative is part of the Politus Project's efforts to analyze organizational presence in social media data.
## Model Architecture
- **Base Model:** BERT (dbmdz/bert-base-turkish-uncased)
- **Training Data:** Twitter data from 4,000 random accounts and 4,000 accounts with high organization-related activity as determined by m3inference scores above 0.7, 8,000 accounts in total. The data was annotated based on user names, screen names, and descriptions using ChatGPT 4.
## Training Setup
- **Tokenization:** Used Hugging Face's AutoTokenizer, padding sequences to a maximum length of 128 tokens.
- **Dataset Split:** 80% training, 20% validation.
- **Training Parameters:**
- Epochs: 3
- Training batch size: 8
- Evaluation batch size: 16
- Warmup steps: 500
- Weight decay: 0.01
## Hyperparameter Tuning
Performed using Optuna; the best settings found were:
- **Learning rate:** 1.84e-05
- **Batch size:** 16
- **Epochs:** 3
## Evaluation Metrics
- **Precision on Validation Set:** 0.67 (organization class)
- **Recall on Validation Set:** 0.81 (organization class)
- **F1-Score (Macro Average):** 0.73
- **Accuracy:** 0.94
- **Confusion Matrix on Validation Set:**
```
[[1390, 60],
[ 28, 122]]
```
- **Hand-coded Sample of 100 Accounts:**
- **Precision:** 0.89
- **Recall:** 0.89
- **F1-Score (Macro Average):** 0.89
- **Confusion Matrix:**
```
[[935, 4],
[ 4, 31]]
```
## How to Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("atsizelti/turkish_org_classifier")
tokenizer = AutoTokenizer.from_pretrained("atsizelti/turkish_org_classifier")
text = "Örnek metin buraya girilir."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
predictions = outputs.logits.argmax(-1)
```
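To map the predicted class index back to a label name, the config's `id2label` can be used; note it may contain generic LABEL_0/LABEL_1 names depending on how the config was saved:

```python
# Continues from the snippet above.
label_id = predictions.item()
print(model.config.id2label.get(label_id, str(label_id)))
```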
| {"language": "tr", "license": "apache-2.0", "tags": ["bert", "turkish", "text-classification"], "datasets": ["custom"], "metrics": ["precision", "recall", "f1", "accuracy"]} | atsizelti/turkish_org_classifier | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"turkish",
"tr",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:00:16+00:00 | [] | [
"tr"
] | TAGS
#transformers #safetensors #bert #text-classification #turkish #tr #dataset-custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT-based Organization Detection Model for Turkish Texts
## Model Description
This model is fine-tuned on the 'dbmdz/bert-base-turkish-uncased' architecture for detecting organization accounts within Turkish Twitter. This initiative is part of the Politus Project's efforts to analyze organizational presence in social media data.
## Model Architecture
- Base Model: BERT (dbmdz/bert-base-turkish-uncased)
- Training Data: Twitter data from 4,000 random accounts and 4,000 accounts with high organization-related activity as determined by m3inference scores above 0.7, 8,000 accounts in total. The data was annotated based on user names, screen names, and descriptions using ChatGPT 4.
## Training Setup
- Tokenization: Used Hugging Face's AutoTokenizer, padding sequences to a maximum length of 128 tokens.
- Dataset Split: 80% training, 20% validation.
- Training Parameters:
- Epochs: 3
- Training batch size: 8
- Evaluation batch size: 16
- Warmup steps: 500
- Weight decay: 0.01
## Hyperparameter Tuning
Performed using Optuna; the best settings found were:
- Learning rate: 1.84e-05
- Batch size: 16
- Epochs: 3
## Evaluation Metrics
- Precision on Validation Set: 0.67 (organization class)
- Recall on Validation Set: 0.81 (organization class)
- F1-Score (Macro Average): 0.73
- Accuracy: 0.94
- Confusion Matrix on Validation Set:
- Hand-coded Sample of 100 Accounts:
- Precision: 0.89
- Recall: 0.89
- F1-Score (Macro Average): 0.89
- Confusion Matrix:
## How to Use
| [
"# BERT-based Organization Detection Model for Turkish Texts",
"## Model Description\n\nThis model is fine-tuned on the 'dbmdz/bert-base-turkish-uncased' architecture for detecting organization accounts within Turkish Twitter. This initiative is part of the Politus Project's efforts to analyze organizational presence in social media data.",
"## Model Architecture\n\n- Base Model: BERT (dbmdz/bert-base-turkish-uncased)\n- Training Data: Twitter data from 4,000 random accounts and 4,000 accounts with high organization-related activity as determined by m3inference scores above 0.7, 8,000 accounts in total. The data was annotated based on user names, screen names, and descriptions using ChatGPT 4.",
"## Training Setup\n\n- Tokenization: Used Hugging Face's AutoTokenizer, padding sequences to a maximum length of 128 tokens.\n- Dataset Split: 80% training, 20% validation.\n- Training Parameters: \n - Epochs: 3\n - Training batch size: 8\n - Evaluation batch size: 16\n - Warmup steps: 500\n - Weight decay: 0.01",
"## Hyperparameter Tuning\n\nPerformed using Optuna, achieving best settings:\n- Learning rate: 1.84e-05\n- Batch size: 16\n- Epochs: 3",
"## Evaluation Metrics\n\n- Precision on Validation Set: 0.67 (organization class)\n- Recall on Validation Set: 0.81 (organization class)\n- F1-Score (Macro Average): 0.73\n- Accuracy: 0.94\n- Confusion Matrix on Validation Set:\n \n\n- Hand-coded Sample of 100 Accounts:\n- Precision: 0.89\n- Recall: 0.89\n- F1-Score (Macro Average): 0.89\n- Confusion Matrix:",
"## How to Use"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #turkish #tr #dataset-custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT-based Organization Detection Model for Turkish Texts",
"## Model Description\n\nThis model is fine-tuned on the 'dbmdz/bert-base-turkish-uncased' architecture for detecting organization accounts within Turkish Twitter. This initiative is part of the Politus Project's efforts to analyze organizational presence in social media data.",
"## Model Architecture\n\n- Base Model: BERT (dbmdz/bert-base-turkish-uncased)\n- Training Data: Twitter data from 4,000 random accounts and 4,000 accounts with high organization-related activity as determined by m3inference scores above 0.7, 8,000 accounts in total. The data was annotated based on user names, screen names, and descriptions using ChatGPT 4.",
"## Training Setup\n\n- Tokenization: Used Hugging Face's AutoTokenizer, padding sequences to a maximum length of 128 tokens.\n- Dataset Split: 80% training, 20% validation.\n- Training Parameters: \n - Epochs: 3\n - Training batch size: 8\n - Evaluation batch size: 16\n - Warmup steps: 500\n - Weight decay: 0.01",
"## Hyperparameter Tuning\n\nPerformed using Optuna, achieving best settings:\n- Learning rate: 1.84e-05\n- Batch size: 16\n- Epochs: 3",
"## Evaluation Metrics\n\n- Precision on Validation Set: 0.67 (organization class)\n- Recall on Validation Set: 0.81 (organization class)\n- F1-Score (Macro Average): 0.73\n- Accuracy: 0.94\n- Confusion Matrix on Validation Set:\n \n\n- Hand-coded Sample of 100 Accounts:\n- Precision: 0.89\n- Recall: 0.89\n- F1-Score (Macro Average): 0.89\n- Confusion Matrix:",
"## How to Use"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.01_ablation_5iters_bs256_nodpo_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.01_ablation_5iters_bs256_nodpo_iter_1", "results": []}]} | ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T23:01:55+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.01_ablation_5iters_bs256_nodpo_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.01_ablation_5iters_bs256_nodpo_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.01_ablation_5iters_bs256_nodpo_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_5iters_bs256_nodpo_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_ablation_5iters_bs256_nodpo_iter_1", "results": []}]} | ShenaoZ/0.001_ablation_5iters_bs256_nodpo_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T23:01:56+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_ablation_5iters_bs256_nodpo_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.001_ablation_5iters_bs256_nodpo_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_ablation_5iters_bs256_nodpo_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Deep RL course notebook (it downloads and unpickles the model dict)
model = load_from_hub(repo_id="cmattoon/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
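A short greedy rollout with the downloaded Q-table might look like this, assuming the course's saved dict layout with a "qtable" key and the gymnasium step API:

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```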
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | cmattoon/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-23T23:03:54+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1 .
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
text-generation | transformers | # jeiku/Average_Normie_l3_v0_8B AWQ
- Model creator: [jeiku](https://huggingface.co/jeiku)
- Original model: [Average_Normie_l3_v0_8B](https://huggingface.co/jeiku/Average_Normie_l3_v0_8B)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Average_Normie_l3_v0_8B-AWQ"
system_message = "You are Average_Normie_l3_v0_8B, incarnated as a powerful AI. You were created by jeiku."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Average_Normie_l3_v0_8B-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T23:06:05+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us
| # jeiku/Average_Normie_l3_v0_8B AWQ
- Model creator: jeiku
- Original model: Average_Normie_l3_v0_8B
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# jeiku/Average_Normie_l3_v0_8B AWQ\n\n- Model creator: jeiku\n- Original model: Average_Normie_l3_v0_8B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us \n",
"# jeiku/Average_Normie_l3_v0_8B AWQ\n\n- Model creator: jeiku\n- Original model: Average_Normie_l3_v0_8B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Deep RL course notebook (it downloads and unpickles the model dict)
model = load_from_hub(repo_id="cmattoon/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | cmattoon/Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-23T23:08:01+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3 .
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_large_1.1
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1063
- Wer: 1.0
- Cer: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
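As a rough illustration, these settings map onto a `TrainingArguments` configuration along the following lines (a sketch, not the actual training script; `output_dir` is assumed):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2_large_1.1",  # assumed
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```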
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---:|:------:|
| 3.8007 | 1.12 | 700 | 3.0604 | 1.0 | 0.9997 |
| 3.3783 | 2.24 | 1400 | 3.2217 | 1.0 | 0.9997 |
| 3.2938 | 3.36 | 2100 | 3.1471 | 1.0 | 0.9997 |
| 3.2049 | 4.48 | 2800 | 3.1063 | 1.0 | 0.9997 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-large", "model-index": [{"name": "wav2vec2_large_1.1", "results": []}]} | Myriam123/wav2vec2_large_1.1 | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:09:04+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-large #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2\_large\_1.1
====================
This model is a fine-tuned version of facebook/wav2vec2-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1063
* Wer: 1.0
* Cer: 0.9997
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-large #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-albumentations
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3172
- Accuracy: 0.9193
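The card gives no usage snippet; a minimal inference sketch with the Transformers pipeline API (repo id taken from this card's metadata, image path illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="samayl24/convnext-tiny-224-finetuned-eurosat-albumentations",
)
predictions = classifier("satellite_tile.jpg")  # any RGB image path or URL
print(predictions[:3])  # top predicted classes with scores
```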
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.1594 | 0.9950 | 99 | 0.8767 | 0.7523 |
| 0.536 | 2.0 | 199 | 0.3970 | 0.8974 |
| 0.3935 | 2.9849 | 297 | 0.3172 | 0.9193 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "facebook/convnext-tiny-224", "model-index": [{"name": "convnext-tiny-224-finetuned-eurosat-albumentations", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9193205944798302, "name": "Accuracy"}]}]}]} | samayl24/convnext-tiny-224-finetuned-eurosat-albumentations | null | [
"transformers",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnext-tiny-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:13:23+00:00 | [] | [] | TAGS
#transformers #safetensors #convnext #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/convnext-tiny-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| convnext-tiny-224-finetuned-eurosat-albumentations
==================================================
This model is a fine-tuned version of facebook/convnext-tiny-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3172
* Accuracy: 0.9193
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0+cu118
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #convnext #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/convnext-tiny-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
reinforcement-learning | stable-baselines3 |
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint and load it (filename follows the usual sb3 convention, an assumption here)
checkpoint = load_from_hub(repo_id="ahforoughi/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
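Once loaded, the policy can be rolled out in the matching environment. This sketch assumes `panda-gym` v3 is installed, which registers PandaReachDense-v3 with Gymnasium:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```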
| {"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.20 +/- 0.07", "name": "mean_reward", "verified": false}]}]}]} | ahforoughi/a2c-PandaReachDense-v3 | null | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-23T23:15:32+00:00 | [] | [] | TAGS
#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# A2C Agent playing PandaReachDense-v3
This is a trained model of an A2C agent playing PandaReachDense-v3
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
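In lieu of the missing snippet, the standard Transformers loading pattern for this repo would look like the following (a sketch; prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "juewang/Meta-Llama-3-8B-wo-gqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```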
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | juewang/Meta-Llama-3-8B-wo-gqa | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T23:19:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: sosoai/mixed_dataset
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./out
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
eval_sample_packing: False
evals_per_epoch: 2
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# out
This model is a fine-tuned version of [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9558 | 0.06 | 1 | 2.0462 |
| 1.8918 | 0.52 | 9 | 1.9380 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.0
| {"license": "other", "tags": ["generated_from_trainer"], "base_model": "beomi/Llama-3-Open-Ko-8B-Instruct-preview", "model-index": [{"name": "out", "results": []}]} | sosoai/hansoldeco-beomi-Llama-3-Open-Ko-8B-Instruct-preview-pytorch-v0.1 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T23:20:55+00:00 | [] | [] | TAGS
#transformers #pytorch #llama #text-generation #generated_from_trainer #conversational #base_model-beomi/Llama-3-Open-Ko-8B-Instruct-preview #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| <img src="URL alt="Built with Axolotl" width="200" height="32"/>
See axolotl config
axolotl version: '0.4.0'
out
===
This model is a fine-tuned version of beomi/Llama-3-Open-Ko-8B-Instruct-preview on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9380
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.40.0.dev0
* Pytorch 2.1.1
* Datasets 2.15.0
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.1\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #generated_from_trainer #conversational #base_model-beomi/Llama-3-Open-Ko-8B-Instruct-preview #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.1\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_finetuned_ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
- Precision: 0.9420
- Recall: 0.9553
- F1: 0.9486
- Accuracy: 0.9997
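No usage example is provided; a minimal inference sketch for this checkpoint (the label set is not documented in the card):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Cerastes/distilbert-base-uncased_finetuned_ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```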
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 341 | 0.0032 | 0.8657 | 0.8682 | 0.8670 | 0.9993 |
| 0.0338 | 2.0 | 682 | 0.0017 | 0.9366 | 0.9372 | 0.9369 | 0.9997 |
| 0.0021 | 3.0 | 1023 | 0.0013 | 0.9420 | 0.9553 | 0.9486 | 0.9997 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased_finetuned_ner", "results": []}]} | Cerastes/distilbert-base-uncased_finetuned_ner | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:25:08+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased\_finetuned\_ner
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0013
* Precision: 0.9420
* Recall: 0.9553
* F1: 0.9486
* Accuracy: 0.9997
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 20
* eval\_batch\_size: 20
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 20\n* eval\\_batch\\_size: 20\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 20\n* eval\\_batch\\_size: 20\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
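In place of the missing snippet, a generic mean-pooled embedding sketch for an XLM-R-based encoder such as this one (the "query: " prefix is an E5 convention and an assumption here, since the card documents no input format):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "dayoon/e5_new_loss_epoch1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tokenizer(["query: how do I load this model?"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # [batch, tokens, dim]
mask = batch["attention_mask"].unsqueeze(-1)       # [batch, tokens, 1]
embedding = (hidden * mask).sum(1) / mask.sum(1)   # mean pooling over real tokens
print(embedding.shape)
```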
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | dayoon/e5_new_loss_epoch1 | null | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:27:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | null | # Hyper-SD - ONNX Olive DirectML Optimized
## Original Model
https://huggingface.co/ByteDance/Hyper-SD
## C# Inference Demo
https://github.com/saddam213/OnnxStack
```csharp
// Create Pipeline
var pipeline = StableDiffusionXLPipeline.CreatePipeline("D:\\Models\\Hyper-SD-onnx");
// Prompt
var promptOptions = new PromptOptions
{
Prompt = "photo of a cat"
};
// Scheduler Options
var schedulerOptions = pipeline.DefaultSchedulerOptions with
{
InferenceSteps = 1,
GuidanceScale = 0,
SchedulerType = SchedulerType.DDIM,
Timesteps = new List<int> { 800 }
};
// Run pipeline
var result = await pipeline.GenerateImageAsync(promptOptions, schedulerOptions);
// Save Image Result
await result.SaveAsync("Result.png");
```
## Inference Result
 | {"pipeline_tag": "text-to-image"} | saddam213/Hyper-SD-onnx | null | [
"onnx",
"text-to-image",
"region:us"
] | null | 2024-04-23T23:27:52+00:00 | [] | [] | TAGS
#onnx #text-to-image #region-us
| # Hyper-SD - ONNX Olive DirectML Optimized
## Original Model
URL
## C# Inference Demo
URL
## Inference Result
!Intro Image | [
"# Hyper-SD - Onnx Olive DirectML Optimized",
"## Original Model\nURL",
"## C# Inference Demo\nURL",
"## Inference Result\n!Intro Image"
] | [
"TAGS\n#onnx #text-to-image #region-us \n",
"# Hyper-SD - Onnx Olive DirectML Optimized",
"## Original Model\nURL",
"## C# Inference Demo\nURL",
"## Inference Result\n!Intro Image"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [andreasmadsen/efficient_mlm_m0.40](https://huggingface.co/andreasmadsen/efficient_mlm_m0.40) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0610
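No usage snippet is included; a minimal generation sketch, assuming the checkpoint exposes a causal-LM head as its text-generation tag indicates:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="CornCube/my_awesome_eli5_clm-model")
result = generator("Somatic hypermutation allows the immune system to", max_new_tokens=40)
print(result[0]["generated_text"])
```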
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1265 | 1.0 | 1332 | 0.0689 |
| 0.0717 | 2.0 | 2664 | 0.0622 |
| 0.0663 | 3.0 | 3996 | 0.0610 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "andreasmadsen/efficient_mlm_m0.40", "model-index": [{"name": "my_awesome_eli5_clm-model", "results": []}]} | CornCube/my_awesome_eli5_clm-model | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta-prelayernorm",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:andreasmadsen/efficient_mlm_m0.40",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:28:26+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta-prelayernorm #text-generation #generated_from_trainer #dataset-eli5_category #base_model-andreasmadsen/efficient_mlm_m0.40 #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_eli5\_clm-model
============================
This model is a fine-tuned version of andreasmadsen/efficient\_mlm\_m0.40 on the eli5\_category dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0610
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta-prelayernorm #text-generation #generated_from_trainer #dataset-eli5_category #base_model-andreasmadsen/efficient_mlm_m0.40 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** 1024m
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
- **Part of EXALT shared task submission : fine-tuned on train data - v001**
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | 1024m/LLAMA3-01-EXALT-1A-16bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:28:33+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: 1024m
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
- Part of EXALT shared task submission : fine-tuned on train data - v001
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: 1024m\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\n- Part of EXALT shared task submission : fine-tuned on train data - v001\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: 1024m\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\n- Part of EXALT shared task submission : fine-tuned on train data - v001\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
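The snippet itself is missing; given the `diffusers:StableDiffusionPipeline` tag in this repo's metadata, the standard loading pattern would be (prompt and dtype are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("rubbrband/yacam_SR36476", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of a cat").images[0]
image.save("result.png")
```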
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/yacam_SR36476 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-23T23:28:43+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
## This repo contains GGUF versions of the microsoft/Phi-3-mini-4k-instruct model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Phi-3-mini-4k-instruct-GGUF-smashed and below it, a specific filename to download, such as: Phi-3-mini-4k-instruct.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Phi-3-mini-4k-instruct-GGUF-smashed Phi-3-mini-4k-instruct.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
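If you prefer to do the same from Python, `huggingface_hub` exposes an equivalent API. A minimal sketch (repo id and filename follow the commands above):

```python
from huggingface_hub import hf_hub_download

# Fetch a single quantized file into the current directory.
path = hf_hub_download(
    repo_id="PrunaAI/Phi-3-mini-4k-instruct-GGUF-smashed",
    filename="Phi-3-mini-4k-instruct.IQ3_M.gguf",
    local_dir=".",
)
print(path)
```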
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/Phi-3-mini-4k-instruct-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Phi-3-mini-4k-instruct-GGUF-smashed Phi-3-mini-4k-instruct.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run the model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Phi-3-mini-4k-instruct.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as shown below.
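For example, an interactive session could be started like this (a sketch; adjust `-ngl` and `-c` to your hardware):

```shell
./main -ngl 35 -m Phi-3-mini-4k-instruct.IQ3_M.gguf --color -c 4096 --temp 0.7 -i -ins
```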
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Phi-3-mini-4k-instruct.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Phi-3-mini-4k-instruct.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
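As a brief illustration, wrapping the local GGUF file in LangChain might look like this (a hedged sketch; the import path assumes a recent `langchain-community` release):

```python
from langchain_community.llms import LlamaCpp

# Wrap the local GGUF file as a LangChain-compatible LLM.
llm = LlamaCpp(
    model_path="./Phi-3-mini-4k-instruct.IQ3_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=35,   # set to 0 for CPU-only inference
    temperature=0.7,
)
print(llm.invoke("Explain GGUF quantization in one sentence."))
```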
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model that provided the base before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
| {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/Phi-3-mini-4k-instruct-GGUF-Imatrix-smashed | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | 2024-04-23T23:31:12+00:00 | [] | [] | TAGS
#gguf #pruna-ai #region-us
|
[](URL target=)
:
* Step 1: We recommend using the 'huggingface-hub' Python library:
* Step 2: Then you can download any individual model file to the current directory, at high speed, with a command like this:
More advanced huggingface-cli download usage (click to read)
Alternatively, you can also download multiple files at once with a pattern:
For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.
To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\_transfer':
And set environment variable 'HF\_HUB\_ENABLE\_HF\_TRANSFER' to '1':
Windows Command Line users: You can set the environment variable by running 'set HF\_HUB\_ENABLE\_HF\_TRANSFER=1' before the download command.
How to run model in GGUF format?
--------------------------------
* Option A - Introductory example with 'URL' command
Make sure you are using 'URL' from commit d0cee0d or later.
Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins'
For other parameters and how to use them, please refer to the URL documentation
* Option B - Running in 'text-generation-webui'
Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.
* Option C - Running from Python code
You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
```
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: llama-cpp-python docs.
#### First install the package
Run one of the following commands, according to your system:
#### Simple llama-cpp-python example code
```
* Option D - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* LangChain + llama-cpp-python
* LangChain + ctransformers
Configurations
--------------
The configuration info are in 'smash\_config.json'.
Credits & License
-----------------
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
Want to compress other models?
------------------------------
* Contact us and tell us which model to compress next here.
* Request access to easily compress your own AI models here.
| [
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here."
] | [
"TAGS\n#gguf #pruna-ai #region-us \n",
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here."
] |
null | null | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the microsoft/Phi-3-mini-128k-instruct model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed and below it, a specific filename to download, such as: Phi-3-mini-128k-instruct.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed Phi-3-mini-128k-instruct.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed Phi-3-mini-128k-instruct.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run the model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Phi-3-mini-128k-instruct.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as shown below.
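For example (a sketch; the 128k variant reads its long-context RoPE parameters from the GGUF file, so `-c` only needs to cover your actual prompt length):

```shell
./main -ngl 35 -m Phi-3-mini-128k-instruct.IQ3_M.gguf --color -c 8192 --temp 0.7 -i -ins
```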
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Phi-3-mini-128k-instruct.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Phi-3-mini-128k-instruct.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model that provided the base before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
| {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | 2024-04-23T23:31:13+00:00 | [] | [] | TAGS
#gguf #pruna-ai #region-us
|
[](URL target=)
:
* Step 1: We recommend using the 'huggingface-hub' Python library:
* Step 2: Then you can download any individual model file to the current directory, at high speed, with a command like this:
More advanced huggingface-cli download usage (click to read)
Alternatively, you can also download multiple files at once with a pattern:
For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.
To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\_transfer':
And set environment variable 'HF\_HUB\_ENABLE\_HF\_TRANSFER' to '1':
Windows Command Line users: You can set the environment variable by running 'set HF\_HUB\_ENABLE\_HF\_TRANSFER=1' before the download command.
How to run model in GGUF format?
--------------------------------
* Option A - Introductory example with 'URL' command
Make sure you are using 'URL' from commit d0cee0d or later.
Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins'
For other parameters and how to use them, please refer to the URL documentation
* Option B - Running in 'text-generation-webui'
Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.
* Option C - Running from Python code
You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
```
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: llama-cpp-python docs.
#### First install the package
Run one of the following commands, according to your system:
#### Simple llama-cpp-python example code
```
* Option D - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* LangChain + llama-cpp-python
* LangChain + ctransformers
Configurations
--------------
The configuration info are in 'smash\_config.json'.
Credits & License
-----------------
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
Want to compress other models?
------------------------------
* Contact us and tell us which model to compress next here.
* Request access to easily compress your own AI models here.
| [
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here."
] | [
"TAGS\n#gguf #pruna-ai #region-us \n",
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here."
] |
reinforcement-learning | transformers |
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="PranavBP525/phi-2-storygen-rlhf")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("PranavBP525/phi-2-storygen-rlhf")
model = AutoModelForCausalLMWithValueHead.from_pretrained("PranavBP525/phi-2-storygen-rlhf")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
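The forward pass of the value-head model returns the language-model logits, the loss, and a per-token value estimate. A hedged sketch of unpacking them (the tuple order below follows recent `trl` releases and may vary between versions):

```python
lm_logits, loss, values = outputs
print(lm_logits.shape)  # (batch_size, sequence_length, vocab_size)
print(values.shape)     # (batch_size, sequence_length): one scalar value estimate per token
```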
| {"license": "apache-2.0", "tags": ["trl", "ppo", "transformers", "reinforcement-learning"]} | PranavBP525/phi-2-storygen-rlhf | null | [
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:31:37+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #trl #ppo #reinforcement-learning #license-apache-2.0 #endpoints_compatible #region-us
|
# TRL Model
This is a TRL language model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
You can then generate text as follows:
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
| [
"# TRL Model\n\nThis is a TRL language model that has been fine-tuned with reinforcement learning to\n guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.",
"## Usage\n\nTo use this model for inference, first install the TRL library:\n\n\n\nYou can then generate text as follows:\n\n\n\nIf you want to use the model for training or to obtain the outputs from the value head, load the model as follows:"
] | [
"TAGS\n#transformers #pytorch #safetensors #trl #ppo #reinforcement-learning #license-apache-2.0 #endpoints_compatible #region-us \n",
"# TRL Model\n\nThis is a TRL language model that has been fine-tuned with reinforcement learning to\n guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.",
"## Usage\n\nTo use this model for inference, first install the TRL library:\n\n\n\nYou can then generate text as follows:\n\n\n\nIf you want to use the model for training or to obtain the outputs from the value head, load the model as follows:"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model_gpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5744
## Model description
More information needed
## Intended uses & limitations
More information needed
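As a rough illustration of direct use (a minimal sketch; the repo id is taken from this card):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for causal text generation.
generator = pipeline("text-generation", model="mkim-MASI/my_awesome_eli5_clm-model_gpt2")
print(generator("Somatic hypermutation allows the immune system to",
                max_new_tokens=50)[0]["generated_text"])
```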
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
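For reference, these settings map roughly onto the following `TrainingArguments` (a sketch; dataset loading, tokenization, and the `Trainer` call are omitted):

```python
from transformers import TrainingArguments

# Mirror the hyperparameters listed above; the Adam betas/epsilon are the defaults.
training_args = TrainingArguments(
    output_dir="my_awesome_eli5_clm-model_gpt2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```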
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6919 | 1.0 | 1333 | 3.5809 |
| 3.5769 | 2.0 | 2666 | 3.5744 |
| 3.5275 | 3.0 | 3999 | 3.5744 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "my_awesome_eli5_clm-model_gpt2", "results": []}]} | mkim-MASI/my_awesome_eli5_clm-model_gpt2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T23:31:55+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_eli5\_clm-model\_gpt2
==================================
This model is a fine-tuned version of gpt2 on the eli5\_category dataset.
It achieves the following results on the evaluation set:
* Loss: 3.5744
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
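For instance, serving the model with TGI via Docker might look like this (a hedged sketch; image tag and port mapping are illustrative):

```
docker run --gpus all --shm-size 1g -p 8080:80 \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id tiiuae/falcon-7b-instruct
```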
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
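To make the multiquery point concrete, here is a toy sketch of the projection shapes (a hypothetical illustration, not the actual Falcon implementation):

```python
import torch
import torch.nn as nn

d_model, head_dim = 4544, 64
n_heads = d_model // head_dim  # 71 query heads

q_proj = nn.Linear(d_model, n_heads * head_dim)  # one query projection per head
kv_proj = nn.Linear(d_model, 2 * head_dim)       # a single shared key/value head

x = torch.randn(1, 2048, d_model)                # (batch, seq, d_model)
q = q_proj(x)    # (1, 2048, 4544) -> reshaped to 71 heads of dim 64
kv = kv_proj(x)  # (1, 2048, 128)  -> one K and one V, broadcast across all query heads
```

Sharing a single key/value head shrinks the KV cache by roughly a factor of `n_heads`, which is what makes the architecture inference-friendly.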
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)
## Citation
*Paper coming soon* 😊. In the meanwhile, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected] | {"language": ["en"], "license": "apache-2.0", "datasets": ["tiiuae/falcon-refinedweb"], "inference": true, "widget": [{"text": "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?", "example_title": "Abu Dhabi Trip"}, {"text": "What's the Everett interpretation of quantum mechanics?", "example_title": "Q/A: Quantum & Answers"}, {"text": "Give me a list of the top 10 dive sites you would recommend around the world.", "example_title": "Diving Top 10"}, {"text": "Can you tell me more about deep-water soloing?", "example_title": "Extreme sports"}, {"text": "Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?", "example_title": "Twitter Helper"}, {"text": "What are the responsabilities of a Chief Llama Officer?", "example_title": "Trendy Jobs"}]} | titanbot/ct2-int8-falcon-7b-instruct | null | [
"transformers",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T23:32:36+00:00 | [
"2205.14135",
"1911.02150",
"2005.14165",
"2104.09864",
"2306.01116"
] | [
"en"
] | TAGS
#transformers #RefinedWebModel #text-generation #custom_code #en #dataset-tiiuae/falcon-refinedweb #arxiv-2205.14135 #arxiv-1911.02150 #arxiv-2005.14165 #arxiv-2104.09864 #arxiv-2306.01116 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Falcon-7B-Instruct
==================
Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by TII based on Falcon-7B and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.
*Paper coming soon .*
To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blogpost from HF!
Why use Falcon-7B-Instruct?
---------------------------
* You are looking for a ready-to-use chat/instruct model based on Falcon-7B.
* Falcon-7B is a strong base model, outperforming comparable open-source models (e.g., MPT-7B, StableLM, RedPajama etc.), thanks to being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. See the OpenLLM Leaderboard.
* It features an architecture optimized for inference, with FlashAttention (Dao et al., 2022) and multiquery (Shazeer et al., 2019).
This is an instruct model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Falcon-7B.
Looking for an even more powerful model? Falcon-40B-Instruct is Falcon-7B-Instruct's big brother!
Falcon LLMs require PyTorch 2.0 for use with 'transformers'!
For fast inference with Falcon, check out Text Generation Inference! Read more in this blogpost.
You will need at least 16GB of memory to swiftly run inference with Falcon-7B-Instruct.
Model Card for Falcon-7B-Instruct
=================================
Model Details
-------------
### Model Description
* Developed by: URL;
* Model type: Causal decoder-only;
* Language(s) (NLP): English and French;
* License: Apache 2.0;
* Finetuned from model: Falcon-7B.
### Model Source
* Paper: *coming soon*.
Uses
----
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Bias, Risks, and Limitations
----------------------------
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
How to Get Started with the Model
---------------------------------
Training Details
----------------
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
The data was tokenized with the Falcon-7B/40B tokenizer.
Evaluation
----------
*Paper coming soon.*
See the OpenLLM Leaderboard for early results.
Note that this model variant is not optimized for NLP benchmarks.
Technical Specifications
------------------------
For more information about pretraining, see Falcon-7B.
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:
* Positional embeddings: rotary (Su et al., 2021);
* Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
* Decoder-block: parallel attention/MLP with a single layer norm.
| Hyperparameter | Value | Comment |
|:---------------|:------|:--------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
*Paper coming soon.* In the meantime, you can use the following information to cite:
To learn more about the pretraining dataset, see the RefinedWeb paper.
License
-------
Falcon-7B-Instruct is made available under the Apache 2.0 license.
Contact
-------
falconllm@URL
| [
"### Model Description\n\n\n* Developed by: URL;\n* Model type: Causal decoder-only;\n* Language(s) (NLP): English and French;\n* License: Apache 2.0;\n* Finetuned from model: Falcon-7B.",
"### Model Source\n\n\n* Paper: *coming soon*.\n\n\nUses\n----",
"### Direct Use\n\n\nFalcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.",
"### Out-of-Scope Use\n\n\nProduction use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nFalcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.",
"### Recommendations\n\n\nWe recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nTraining Details\n----------------",
"### Training Data\n\n\nFalcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.\n\n\n\nThe data was tokenized with the Falcon-7B/40B tokenizer.\n\n\nEvaluation\n----------\n\n\n*Paper coming soon.*\n\n\nSee the OpenLLM Leaderboard for early results.\n\n\nNote that this model variant is not optimized for NLP benchmarks.\n\n\nTechnical Specifications\n------------------------\n\n\nFor more information about pretraining, see Falcon-7B.",
"### Model Architecture and Objective\n\n\nFalcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).\n\n\nThe architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:\n\n\n* Positionnal embeddings: rotary (Su et al., 2021);\n* Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);\n* Decoder-block: parallel attention/MLP with a single layer norm.\n\n\nHyperparameter: Layers, Value: 32, Comment: \nHyperparameter: 'd\\_model', Value: 4544, Comment: Increased to compensate for multiquery\nHyperparameter: 'head\\_dim', Value: 64, Comment: Reduced to optimise for FlashAttention\nHyperparameter: Vocabulary, Value: 65024, Comment: \nHyperparameter: Sequence length, Value: 2048, Comment:",
"### Compute Infrastructure",
"#### Hardware\n\n\nFalcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.",
"#### Software\n\n\nFalcon-7B-Instruct was trained a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)\n\n\n*Paper coming soon* . In the meanwhile, you can use the following information to cite:\n\n\nTo learn more about the pretraining dataset, see the RefinedWeb paper.\n\n\nLicense\n-------\n\n\nFalcon-7B-Instruct is made available under the Apache 2.0 license.\n\n\nContact\n-------\n\n\nfalconllm@URL"
] | [
"TAGS\n#transformers #RefinedWebModel #text-generation #custom_code #en #dataset-tiiuae/falcon-refinedweb #arxiv-2205.14135 #arxiv-1911.02150 #arxiv-2005.14165 #arxiv-2104.09864 #arxiv-2306.01116 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model Description\n\n\n* Developed by: URL;\n* Model type: Causal decoder-only;\n* Language(s) (NLP): English and French;\n* License: Apache 2.0;\n* Finetuned from model: Falcon-7B.",
"### Model Source\n\n\n* Paper: *coming soon*.\n\n\nUses\n----",
"### Direct Use\n\n\nFalcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.",
"### Out-of-Scope Use\n\n\nProduction use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nFalcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.",
"### Recommendations\n\n\nWe recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nTraining Details\n----------------",
"### Training Data\n\n\nFalcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.\n\n\n\nThe data was tokenized with the Falcon-7B/40B tokenizer.\n\n\nEvaluation\n----------\n\n\n*Paper coming soon.*\n\n\nSee the OpenLLM Leaderboard for early results.\n\n\nNote that this model variant is not optimized for NLP benchmarks.\n\n\nTechnical Specifications\n------------------------\n\n\nFor more information about pretraining, see Falcon-7B.",
"### Model Architecture and Objective\n\n\nFalcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).\n\n\nThe architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:\n\n\n* Positionnal embeddings: rotary (Su et al., 2021);\n* Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);\n* Decoder-block: parallel attention/MLP with a single layer norm.\n\n\nHyperparameter: Layers, Value: 32, Comment: \nHyperparameter: 'd\\_model', Value: 4544, Comment: Increased to compensate for multiquery\nHyperparameter: 'head\\_dim', Value: 64, Comment: Reduced to optimise for FlashAttention\nHyperparameter: Vocabulary, Value: 65024, Comment: \nHyperparameter: Sequence length, Value: 2048, Comment:",
"### Compute Infrastructure",
"#### Hardware\n\n\nFalcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.",
"#### Software\n\n\nFalcon-7B-Instruct was trained a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)\n\n\n*Paper coming soon* . In the meanwhile, you can use the following information to cite:\n\n\nTo learn more about the pretraining dataset, see the RefinedWeb paper.\n\n\nLicense\n-------\n\n\nFalcon-7B-Instruct is made available under the Apache 2.0 license.\n\n\nContact\n-------\n\n\nfalconllm@URL"
] |
text-generation | transformers |
# nebchi/kor-resume-Orion-14B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nebchi/kor-resume-Orion-14B`](https://huggingface.co/nebchi/kor-resume-Orion-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nebchi/kor-resume-Orion-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo nebchi/kor-resume-Orion-14B-Q4_K_M-GGUF --model kor-resume-orion-14b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo nebchi/kor-resume-Orion-14B-Q4_K_M-GGUF --model kor-resume-orion-14b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m kor-resume-orion-14b.Q4_K_M.gguf -n 128
```
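For programmatic use, here is a hedged sketch with the `llama-cpp-python` bindings; the local file name assumes you downloaded the Q4_K_M file referenced above:

```python
# Hedged sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path assumes the Q4_K_M file from this repo is in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="kor-resume-orion-14b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: Summarize what a strong resume looks like. A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```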
| {"language": ["ko"], "license": "other", "library_name": "transformers", "tags": ["pytorch", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"} | nebchi/kor-resume-Orion-14B-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:33:03+00:00 | [] | [
"ko"
] | TAGS
#transformers #gguf #pytorch #llama-cpp #gguf-my-repo #text-generation #ko #license-other #endpoints_compatible #region-us
|
# nebchi/kor-resume-Orion-14B-Q4_K_M-GGUF
This model was converted to GGUF format from 'nebchi/kor-resume-Orion-14B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# nebchi/kor-resume-Orion-14B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'nebchi/kor-resume-Orion-14B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #pytorch #llama-cpp #gguf-my-repo #text-generation #ko #license-other #endpoints_compatible #region-us \n",
"# nebchi/kor-resume-Orion-14B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'nebchi/kor-resume-Orion-14B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers | # lodrick-the-lafted/Copus-2x8B AWQ
- Model creator: [lodrick-the-lafted](https://huggingface.co/lodrick-the-lafted)
- Original model: [Copus-2x8B](https://huggingface.co/lodrick-the-lafted/Copus-2x8B)
<img src=https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png>
## Model Summary
MoE'd up:
- [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b)
- [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Copus-2x8B-AWQ"
system_message = "You are Copus-2x8B, incarnated as a powerful AI. You were created by lodrick-the-lafted."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types (see the sketch below).
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
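As one concrete example of the vLLM route above, here is a hedged sketch; `quantization="awq"` is vLLM's documented flag for AWQ checkpoints, and the sampling values are illustrative:

```python
# Hedged vLLM sketch for this AWQ checkpoint; sampling values are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="solidrust/Copus-2x8B-AWQ", quantization="awq", dtype="half")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain AWQ quantization in two sentences."], params)
print(outputs[0].outputs[0].text)
```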
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Copus-2x8B-AWQ | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null | 2024-04-23T23:33:13+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us
| # lodrick-the-lafted/Copus-2x8B AWQ
- Model creator: lodrick-the-lafted
- Original model: Copus-2x8B
<img src=URL
## Model Summary
MoE'd up:
- dreamgen/opus-v1.2-llama-3-8b
- NousResearch/Meta-Llama-3-8B-Instruct_
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# lodrick-the-lafted/Copus-2x8B AWQ\n\n- Model creator: lodrick-the-lafted\n- Original model: Copus-2x8B\n\n<img src=URL",
"## Model Summary\n\nMoE'd up:\n- dreamgen/opus-v1.2-llama-3-8b\n- NousResearch/Meta-Llama-3-8B-Instruct_",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us \n",
"# lodrick-the-lafted/Copus-2x8B AWQ\n\n- Model creator: lodrick-the-lafted\n- Original model: Copus-2x8B\n\n<img src=URL",
"## Model Summary\n\nMoE'd up:\n- dreamgen/opus-v1.2-llama-3-8b\n- NousResearch/Meta-Llama-3-8B-Instruct_",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
text-generation | transformers |
#### Do not run inference from the model card widget! It won't work!

#### Ignore config.json

#### Use this Space instead:
https://huggingface.co/spaces/Chrisneverdie/SportsRWKV
This model is built on the RWKV-6 architecture: an RNN with transformer-level LLM performance that can be trained directly like a GPT (parallelizable). It combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
This is also part of a project to explore the potential of the Small Language Model in the Sports domain.
Our previous projects:
https://github.com/chrischenhub/FirstSportsELM
https://huggingface.co/spaces/Chrisneverdie/SportsDPT
This model is finetuned on QA pairs, so plain text-completion prompts may produce errors.
Questions unrelated to sports may yield poor results.
It may still provide incorrect information, so treat it as a toy domain model.
# SportsRWKV
Created by Chris Zexin Chen
Email for questions: [email protected]
As avid sports enthusiasts, we’ve consistently observed a gap in the market for a dedicated large language model tailored to the sports domain. This research stems from our curiosity about the potential of a language model that is exclusively trained and fine-tuned on sports-related data. We aim to assess its performance against generic language models, thus delving into the unique nuances and demands of the sports industry.
This model structure is built by BlinkDL: https://github.com/BlinkDL/RWKV-LM
### Pretrain Data
https://huggingface.co/datasets/Chrisneverdie/SportsRWKV
*fixed_text_document.bin & fixed_text_document.idx, ~8.4 GB / 4.5B tokens*
## Pretrain
To replicate our model, you need to use fixed_text_document.bin & fixed_text_document.idx, which are processed and ready for training.
We trained on a 2xH100 80GB node for 5 hrs to get a val loss ~2.305. Once you set up the environment:
For best performance, use Python 3.10, torch 2.1.2+cu121 (or latest), CUDA 12.3+, and the latest deepspeed, but keep pytorch-lightning==1.9.5:
```
pip install torch --upgrade --extra-index-url https://download.pytorch.org/whl/cu121
pip install pytorch-lightning==1.9.5 deepspeed wandb ninja --upgrade
```
First create the model:
```
python train.py --wandb "" --proj_dir "output/"\
--data_file "data/fixed_text_document" --data_type "binidx" --vocab_size 65536 --my_testing "x060"\
--ctx_len 1024 --my_pile_stage 1 --epoch_count 1 --epoch_begin 0 \
--epoch_save 1 --weight_decay 0 --head_size_a 64 \
--num_nodes 1 --micro_bsz 1 --n_layer 12 --n_embd 768 --pre_ffn 0 --head_qk 0 --my_exit_tokens 4534166811 --magic_prime 4427879 \
--lr_init 1e-5 --lr_final 1e-5 --warmup_steps 10 --beta1 0.9 --beta2 0.99 --adam_eps 1e-8 --my_pile_edecay 0 --accelerator cpu --devices 1 --precision bf16 --strategy deepspeed_stage_2 --grad_cp 0
```
Start training:
```
python train.py --load_model "0" --wandb "Test" --proj_dir "output/"
--my_testing "x060" --ctx_len 1024 --my_pile_stage 3 --epoch_count 999999 --epoch_begin 0 --data_file "data/fixed_text_document" --my_exit_tokens 4534166811 --magic_prime 4427879 \
--num_nodes 1 --micro_bsz 12 --n_layer 12 --n_embd 768 --pre_ffn 0 --head_qk 0 --lr_init 6e-4 --lr_final 6e-5 --warmup_steps 10 --beta1 0.9 --beta2 0.99 \
--adam_eps 1e-8 --my_pile_edecay 0 --data_type "binidx" --vocab_size 65536 --weight_decay 0.1 --epoch_save 5 --head_size_a 64 --accelerator gpu --devices 1 \
--precision bf16 --strategy deepspeed_stage_2 --grad_cp 0 --enable_progress_bar True --ds_bucket_mb 200
```
Note: if you are using commercial GPUs, change --grad_cp to 1 and --ds_bucket_mb to 2. Change --devices/--num_nodes if you have multiple GPUs/nodes.
The loss should look like this

After you finish the training, the final .pth file will be saved under the output folder
## Fine Tune
We used thousands of GPT-4-generated sports QA pairs to finetune our model; specifics can be found under: https://github.com/chrischenhub/FirstSportsELM/tree/main/finetune
1. Convert TXT to JSONL files
```python Json2Bin.py```
2. Convert JSONL to binidx for fine-tuning (a sketch of the expected record format follows step 3)
```python make_data.py your_data.jsonl 3 1024```
3. Fine Tune the checkpoint with the following:
Note: put the pretrained .pth file under output
```
python train.py --load_model "0" --wandb "SportsRWKV_ft" --proj_dir "output/" --my_testing "x060" \
--ctx_len 1024 --my_pile_stage 3 --epoch_count 360 --epoch_begin 0 \
--data_file "data/test" --my_exit_tokens 1081350 --magic_prime 1049 \
--num_nodes 1 --micro_bsz 16 --n_layer 12 --n_embd 768 --pre_ffn 0 --head_qk 0 \
--lr_init 3e-3 --lr_final 3e-4 --warmup_steps 10 --beta1 0.9 --beta2 0.99 --adam_eps 1e-8 --my_pile_edecay 0 --data_type "binidx" --vocab_size 65536 \
--weight_decay 0.1 --epoch_save 10 --head_size_a 64 \
--accelerator gpu --devices 1 --precision bf16 --strategy deepspeed_stage_2 --grad_cp 0 --enable_progress_bar True --ds_bucket_mb 2
```
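For reference, here is a hypothetical sketch of what one record in `your_data.jsonl` could look like; the `"text"` field name is an assumption based on the upstream RWKV-LM `make_data.py` tooling:

```python
# Hypothetical JSONL record for make_data.py; the "text" key is an assumption
# based on the upstream RWKV-LM tooling. One JSON object per line in the file.
import json

record = {"text": "Q: Who won the 2016 NBA Finals?\n\nA: The Cleveland Cavaliers."}
print(json.dumps(record))
```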
## Ask Your Question!
For inference, use: https://github.com/BlinkDL/ChatRWKV/blob/main/RWKV_v6_demo.py
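If you prefer a pip package over the demo script, here is a hedged sketch with the `rwkv` package; the checkpoint path and strategy string are assumptions, and the path is given without the `.pth` extension per the package's convention:

```python
# Hedged sketch using the rwkv pip package (pip install rwkv); paths are assumptions.
import os
os.environ["RWKV_JIT_ON"] = "1"  # must be set before importing rwkv.model

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

model = RWKV(model="output/rwkv-final", strategy="cuda fp16")  # path without .pth
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")  # world tokenizer (vocab 65536)
args = PIPELINE_ARGS(temperature=1.0, top_p=0.7)

print(pipeline.generate("Q: Who holds the NBA career scoring record?\nA:",
                        token_count=100, args=args))
```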
## Cost
The entire pretrain and finetune process costs around 100 USD. ~50$ in GPU rentals and ~50$ in OpenAI API usage. | {"language": ["en"], "license": "apache-2.0", "tags": ["sports"], "datasets": ["Chrisneverdie/SportsRWKV"], "pipeline_tag": "text-generation"} | Chrisneverdie/SportsRWKV_150m | null | [
"transformers",
"RWKV-6",
"sports",
"text-generation",
"en",
"dataset:Chrisneverdie/SportsRWKV",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:35:00+00:00 | [] | [
"en"
] | TAGS
#transformers #RWKV-6 #sports #text-generation #en #dataset-Chrisneverdie/SportsRWKV #license-apache-2.0 #endpoints_compatible #region-us
|
#### Do not run the inference from Model card it won't work!!
#### Ignore URL
#### Use this model space instead!!
URL
This model is built on RWKV 6 structure - an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
This is also part of a project to explore the potential of the Small Language Model in the Sports domain.
Our previous projects:
URL
URL
This model is finetuned with QA pairs so a text completion task may result in an error.
Questions unrelated to sports may suffer from poor performance.
It may still provide incorrect information so just take it as a toy domain model.
# SportsRWKV
Created by Chris Zexin Chen
Email for question: zc2404@URL
As avid sports enthusiasts, we’ve consistently observed a gap in the market for a dedicated
large language model tailored to the sports domain. This research stems from our intrigue
about the potential of a language model that is exclusively trained and fine-tuned on sports-
related data. We aim to assess its performance against generic language models, thus delving
into the unique nuances and demands of the sports industry
This model structure is built by BlinkDL: URL
### Pretrain Data
URL
*fixed_text_document.bin&fixed_text_document.idx ~8.4 Gb/4.5B tokens*
## Pretrain
To replicate our model, you need to use fixed_text_document.bin & fixed_text_document.idx, which is processed and ready to train.
We trained on a 2xH100 80GB node for 5 hrs to get a val loss ~2.305. Once you set up the environment:
For best performance, use python 3.10, torch 2.1.2+cu121 (or latest), cuda 12.3+, latest deepspeed, but keep pytorch-lightning==1.9.5
best performance:
First create the model:
Start training:
Note: if you are using commercial GPUs, change --grad_cp to 1 and --ds_bucket_mb to 2. Change --devices/--num_nodes if you have multiple GPUs/nodes
The loss should look like this
!image/png
After you finish the training, the final .pth file will be saved under the output folder
## Fine Tune
We used thousands of GPT4-generated Sports QA pairs to finetune our model - specifics can be found under: URL
1. Convert TXT to Jsonl files
2. Convert Jsonl to Binidx for fine-tuning
3. Fine Tune the checkpoint with the following:
Note: put the pretrained .pth file under output
## Ask Your Question!
For inference, use: URL
## Cost
The entire pretrain and finetune process costs around 100 USD. ~50$ in GPU rentals and ~50$ in OpenAI API usage. | [
"#### Do not run the inference from Model card it won't work!!",
"#### Ignore URL",
"#### Use this model space instead!!\nURL\n\nThis model is built on RWKV 6 structure - an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, \"infinite\" ctx_len, and free sentence embedding.\n\nThis is also part of a project to explore the potential of the Small Language Model in the Sports domain.\nOur previous projects:\n URL\n URL\n \nThis model is finetuned with QA pairs so a text completion task may result in an error.\nQuestions unrelated to sports may suffer from poor performance.\nIt may still provide incorrect information so just take it as a toy domain model.",
"# SportsRWKV\n Created by Chris Zexin Chen\n \n Email for question: zc2404@URL\n\n\nAs avid sports enthusiasts, we’ve consistently observed a gap in the market for a dedicated\nlarge language model tailored to the sports domain. This research stems from our intrigue\nabout the potential of a language model that is exclusively trained and fine-tuned on sports-\nrelated data. We aim to assess its performance against generic language models, thus delving\ninto the unique nuances and demands of the sports industry\n\nThis model structure is built by BlinkDL: URL",
"### Pretrain Data \nURL\n*fixed_text_document.bin&fixed_text_document.idx ~8.4 Gb/4.5B tokens*",
"## Pretrain\nTo replicate our model, you need to use fixed_text_document.bin & fixed_text_document.idx, which is processed and ready to train.\nWe trained on a 2xH100 80GB node for 5 hrs to get a val loss ~2.305. Once you set up the environment:\n\nFor best performance, use python 3.10, torch 2.1.2+cu121 (or latest), cuda 12.3+, latest deepspeed, but keep pytorch-lightning==1.9.5\nbest performance:\n\nFirst create the model:\n\n\nStart training:\n\nNote: if you are using commercial GPUs, change --grad_cp to 1 and --ds_bucket_mb to 2. Change --devices/--num_nodes if you have multiple GPUs/nodes\n\nThe loss should look like this\n!image/png\n\nAfter you finish the training, the final .pth file will be saved under the output folder",
"## Fine Tune\nWe used thousands of GPT4-generated Sports QA pairs to finetune our model - specifics can be found under: URL\n\n1. Convert TXT to Jsonl files\n \n\n\n2. Convert Jsonl to Binidx for fine-tuning\n\n\n\n3. Fine Tune the checkpoint with the following:\nNote: put the pretrained .pth file under output",
"## Ask Your Question!\nFor inference, use: URL",
"## Cost\nThe entire pretrain and finetune process costs around 100 USD. ~50$ in GPU rentals and ~50$ in OpenAI API usage."
] | [
"TAGS\n#transformers #RWKV-6 #sports #text-generation #en #dataset-Chrisneverdie/SportsRWKV #license-apache-2.0 #endpoints_compatible #region-us \n",
"#### Do not run the inference from Model card it won't work!!",
"#### Ignore URL",
"#### Use this model space instead!!\nURL\n\nThis model is built on RWKV 6 structure - an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, \"infinite\" ctx_len, and free sentence embedding.\n\nThis is also part of a project to explore the potential of the Small Language Model in the Sports domain.\nOur previous projects:\n URL\n URL\n \nThis model is finetuned with QA pairs so a text completion task may result in an error.\nQuestions unrelated to sports may suffer from poor performance.\nIt may still provide incorrect information so just take it as a toy domain model.",
"# SportsRWKV\n Created by Chris Zexin Chen\n \n Email for question: zc2404@URL\n\n\nAs avid sports enthusiasts, we’ve consistently observed a gap in the market for a dedicated\nlarge language model tailored to the sports domain. This research stems from our intrigue\nabout the potential of a language model that is exclusively trained and fine-tuned on sports-\nrelated data. We aim to assess its performance against generic language models, thus delving\ninto the unique nuances and demands of the sports industry\n\nThis model structure is built by BlinkDL: URL",
"### Pretrain Data \nURL\n*fixed_text_document.bin&fixed_text_document.idx ~8.4 Gb/4.5B tokens*",
"## Pretrain\nTo replicate our model, you need to use fixed_text_document.bin & fixed_text_document.idx, which is processed and ready to train.\nWe trained on a 2xH100 80GB node for 5 hrs to get a val loss ~2.305. Once you set up the environment:\n\nFor best performance, use python 3.10, torch 2.1.2+cu121 (or latest), cuda 12.3+, latest deepspeed, but keep pytorch-lightning==1.9.5\nbest performance:\n\nFirst create the model:\n\n\nStart training:\n\nNote: if you are using commercial GPUs, change --grad_cp to 1 and --ds_bucket_mb to 2. Change --devices/--num_nodes if you have multiple GPUs/nodes\n\nThe loss should look like this\n!image/png\n\nAfter you finish the training, the final .pth file will be saved under the output folder",
"## Fine Tune\nWe used thousands of GPT4-generated Sports QA pairs to finetune our model - specifics can be found under: URL\n\n1. Convert TXT to Jsonl files\n \n\n\n2. Convert Jsonl to Binidx for fine-tuning\n\n\n\n3. Fine Tune the checkpoint with the following:\nNote: put the pretrained .pth file under output",
"## Ask Your Question!\nFor inference, use: URL",
"## Cost\nThe entire pretrain and finetune process costs around 100 USD. ~50$ in GPU rentals and ~50$ in OpenAI API usage."
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned__roberta-clinical-wl-es__augmented-ultrasounds-ner
This model is a fine-tuned version of [manucos/finetuned__roberta-clinical-wl-es__augmented-ultrasounds](https://huggingface.co/manucos/finetuned__roberta-clinical-wl-es__augmented-ultrasounds) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3995
- Precision: 0.7932
- Recall: 0.8775
- F1: 0.8333
- Accuracy: 0.9231
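A hedged usage sketch with the `transformers` token-classification pipeline; the Spanish ultrasound sentence is a made-up example:

```python
# Hedged usage sketch; aggregation_strategy="simple" merges word pieces into entity spans.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="manucos/finetuned__roberta-clinical-wl-es__augmented-ultrasounds-ner",
    aggregation_strategy="simple",
)
print(ner("Ecografía abdominal: hígado de tamaño y ecoestructura normales."))
```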
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 22 | 1.2788 | 0.5687 | 0.2763 | 0.3719 | 0.6256 |
| No log | 2.0 | 44 | 0.6691 | 0.6975 | 0.7470 | 0.7214 | 0.8576 |
| No log | 3.0 | 66 | 0.4416 | 0.7649 | 0.8168 | 0.7900 | 0.9051 |
| No log | 4.0 | 88 | 0.3715 | 0.7350 | 0.8279 | 0.7787 | 0.9115 |
| No log | 5.0 | 110 | 0.3398 | 0.7658 | 0.8441 | 0.8031 | 0.9221 |
| No log | 6.0 | 132 | 0.3320 | 0.7808 | 0.8472 | 0.8126 | 0.9216 |
| No log | 7.0 | 154 | 0.3306 | 0.7844 | 0.8431 | 0.8127 | 0.9199 |
| No log | 8.0 | 176 | 0.3321 | 0.7778 | 0.8502 | 0.8124 | 0.9199 |
| No log | 9.0 | 198 | 0.3398 | 0.7845 | 0.8512 | 0.8165 | 0.9196 |
| No log | 10.0 | 220 | 0.3445 | 0.7731 | 0.8553 | 0.8121 | 0.9197 |
| No log | 11.0 | 242 | 0.3560 | 0.7804 | 0.8522 | 0.8147 | 0.9196 |
| No log | 12.0 | 264 | 0.3516 | 0.7904 | 0.8664 | 0.8267 | 0.9214 |
| No log | 13.0 | 286 | 0.3553 | 0.7923 | 0.8725 | 0.8304 | 0.9228 |
| No log | 14.0 | 308 | 0.3644 | 0.7896 | 0.8775 | 0.8313 | 0.9223 |
| No log | 15.0 | 330 | 0.3706 | 0.7927 | 0.8745 | 0.8316 | 0.9214 |
| No log | 16.0 | 352 | 0.3763 | 0.7921 | 0.8755 | 0.8317 | 0.9228 |
| No log | 17.0 | 374 | 0.3811 | 0.7869 | 0.8745 | 0.8284 | 0.9228 |
| No log | 18.0 | 396 | 0.3772 | 0.7830 | 0.8765 | 0.8271 | 0.9238 |
| No log | 19.0 | 418 | 0.3888 | 0.7829 | 0.8796 | 0.8284 | 0.9218 |
| No log | 20.0 | 440 | 0.3878 | 0.7900 | 0.8755 | 0.8305 | 0.9208 |
| No log | 21.0 | 462 | 0.3916 | 0.7853 | 0.8775 | 0.8289 | 0.9221 |
| No log | 22.0 | 484 | 0.3884 | 0.7938 | 0.8806 | 0.8349 | 0.9231 |
| 0.2377 | 23.0 | 506 | 0.3926 | 0.7921 | 0.8715 | 0.8299 | 0.9219 |
| 0.2377 | 24.0 | 528 | 0.3951 | 0.7956 | 0.8785 | 0.8350 | 0.9239 |
| 0.2377 | 25.0 | 550 | 0.3941 | 0.7920 | 0.8785 | 0.8330 | 0.9229 |
| 0.2377 | 26.0 | 572 | 0.3970 | 0.7934 | 0.8785 | 0.8338 | 0.9236 |
| 0.2377 | 27.0 | 594 | 0.3979 | 0.7965 | 0.8796 | 0.8360 | 0.9241 |
| 0.2377 | 28.0 | 616 | 0.3999 | 0.7949 | 0.8785 | 0.8346 | 0.9236 |
| 0.2377 | 29.0 | 638 | 0.4001 | 0.7925 | 0.8775 | 0.8329 | 0.9233 |
| 0.2377 | 30.0 | 660 | 0.3995 | 0.7932 | 0.8775 | 0.8333 | 0.9231 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "manucos/finetuned__roberta-clinical-wl-es__augmented-ultrasounds", "model-index": [{"name": "finetuned__roberta-clinical-wl-es__augmented-ultrasounds-ner", "results": []}]} | manucos/finetuned__roberta-clinical-wl-es__augmented-ultrasounds-ner | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:manucos/finetuned__roberta-clinical-wl-es__augmented-ultrasounds",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:35:24+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #base_model-manucos/finetuned__roberta-clinical-wl-es__augmented-ultrasounds #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| finetuned\_\_roberta-clinical-wl-es\_\_augmented-ultrasounds-ner
================================================================
This model is a fine-tuned version of manucos/finetuned\_\_roberta-clinical-wl-es\_\_augmented-ultrasounds on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3995
* Precision: 0.7932
* Recall: 0.8775
* F1: 0.8333
* Accuracy: 0.9231
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #base_model-manucos/finetuned__roberta-clinical-wl-es__augmented-ultrasounds #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-driverbox
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0497
- Accuracy: 0.9880
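A hedged usage sketch with the `transformers` image-classification pipeline; the image path is a placeholder:

```python
# Hedged usage sketch; replace the path with a real image from your data.
from transformers import pipeline

classifier = pipeline("image-classification", model="samayl24/convnext-tiny-224-driverbox")
print(classifier("driver_example.jpg"))  # hypothetical local image path
```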
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.3349 | 0.9950 | 99 | 0.2700 | 0.9328 |
| 0.2393 | 2.0 | 199 | 0.1932 | 0.9540 |
| 0.1831 | 2.9950 | 298 | 0.1403 | 0.9618 |
| 0.1397 | 4.0 | 398 | 0.1055 | 0.9689 |
| 0.0795 | 4.9950 | 497 | 0.1030 | 0.9731 |
| 0.0915 | 6.0 | 597 | 0.0966 | 0.9703 |
| 0.0718 | 6.9950 | 696 | 0.0779 | 0.9745 |
| 0.0502 | 8.0 | 796 | 0.0729 | 0.9788 |
| 0.0314 | 8.9950 | 895 | 0.0621 | 0.9802 |
| 0.0408 | 10.0 | 995 | 0.0758 | 0.9752 |
| 0.0335 | 10.9950 | 1094 | 0.0598 | 0.9823 |
| 0.0228 | 12.0 | 1194 | 0.0573 | 0.9823 |
| 0.0229 | 12.9950 | 1293 | 0.0473 | 0.9844 |
| 0.0119 | 14.0 | 1393 | 0.0642 | 0.9844 |
| 0.028 | 14.9950 | 1492 | 0.0526 | 0.9851 |
| 0.0117 | 16.0 | 1592 | 0.0594 | 0.9837 |
| 0.0187 | 16.9950 | 1691 | 0.0497 | 0.9880 |
| 0.0131 | 18.0 | 1791 | 0.0663 | 0.9837 |
| 0.0132 | 18.9950 | 1890 | 0.0478 | 0.9866 |
| 0.014 | 20.0 | 1990 | 0.0465 | 0.9880 |
| 0.0039 | 20.9950 | 2089 | 0.0496 | 0.9851 |
| 0.0102 | 22.0 | 2189 | 0.0468 | 0.9880 |
| 0.0035 | 22.9950 | 2288 | 0.0581 | 0.9866 |
| 0.0071 | 24.0 | 2388 | 0.0519 | 0.9866 |
| 0.0032 | 24.9950 | 2487 | 0.0510 | 0.9880 |
| 0.0049 | 26.0 | 2587 | 0.0575 | 0.9858 |
| 0.0037 | 26.9950 | 2686 | 0.0511 | 0.9880 |
| 0.0029 | 28.0 | 2786 | 0.0484 | 0.9880 |
| 0.0019 | 28.9950 | 2885 | 0.0523 | 0.9866 |
| 0.0058 | 29.8492 | 2970 | 0.0532 | 0.9866 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "facebook/convnext-tiny-224", "model-index": [{"name": "convnext-tiny-224-driverbox", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9879688605803255, "name": "Accuracy"}]}]}]} | samayl24/convnext-tiny-224-driverbox | null | [
"transformers",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnext-tiny-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:38:10+00:00 | [] | [] | TAGS
#transformers #safetensors #convnext #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/convnext-tiny-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| convnext-tiny-224-driverbox
===========================
This model is a fine-tuned version of facebook/convnext-tiny-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0497
* Accuracy: 0.9880
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0+cu118
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #convnext #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/convnext-tiny-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
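A hedged starter sketch based on the repo tags (`recurrent_gemma`, text-generation); it assumes a transformers release recent enough to include the RecurrentGemma architecture:

```python
# Hedged starter sketch; requires a transformers version that ships recurrent_gemma.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "El-chapoo/R_geema-100m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```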
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | El-chapoo/R_geema-100m | null | [
"transformers",
"safetensors",
"recurrent_gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:38:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #recurrent_gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #recurrent_gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
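The config above can be reconstructed for inference; in this hedged sketch the base model id is an assumption inferred from the adapter name, so verify it before use:

```python
# Hedged sketch: rebuild the training-time bitsandbytes config and attach the adapter.
# The base model id is an assumption inferred from the adapter name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumption
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "alexakkol/mistral-7B-v0.2-a1-adapter")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```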
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | alexakkol/mistral-7B-v0.2-a1-adapter | null | [
"peft",
"safetensors",
"region:us"
] | null | 2024-04-23T23:38:26+00:00 | [] | [] | TAGS
#peft #safetensors #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #safetensors #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/realMINT_theos | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-23T23:41:49+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
| {"license": "mit"} | numen-tech/Phi-3-mini-4k-instruct-w4a16g128asym | null | [
"arxiv:2308.13137",
"license:mit",
"region:us"
] | null | 2024-04-23T23:43:24+00:00 | [
"2308.13137"
] | [] | TAGS
#arxiv-2308.13137 #license-mit #region-us
|
4-bit OmniQuant quantized version of Phi-3-mini-4k-instruct.
| [] | [
"TAGS\n#arxiv-2308.13137 #license-mit #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | heyllm234/sc69 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:44:21+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** andrewatef
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
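
A minimal inference sketch with Unsloth's `FastLanguageModel` follows; the `max_seq_length` value and the prompt are assumptions, not recorded in this card.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="andrewatef/ReSV01Main",  # this repo
    max_seq_length=2048,                 # assumption, not recorded in this card
    load_in_4bit=True,                   # matches the 4-bit base model
)
FastLanguageModel.for_inference(model)   # enable Unsloth's fast inference path

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```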
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/tinyllama-bnb-4bit"} | andrewatef/ReSV01Main | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:45:18+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: andrewatef
- License: apache-2.0
- Finetuned from model: unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
| [
"# Uploaded model\n\n- Developed by: andrewatef\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: andrewatef\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
video-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-ssbd-trim-yolo
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0946
- Accuracy: 0.9844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 9090
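
The list above maps directly onto 🤗 `TrainingArguments`; a minimal sketch is shown below. The `output_dir` name is illustrative, and the Adam settings listed (betas=(0.9,0.999), epsilon=1e-08) are the `Trainer` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-base-ssbd-trim-yolo",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=9090,
)
```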
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.6749 | 0.0333 | 303 | 0.7530 | 0.6452 |
| 0.9597 | 1.0333 | 606 | 1.4679 | 0.7137 |
| 0.5435 | 2.0333 | 909 | 0.4142 | 0.8992 |
| 0.5258 | 3.0333 | 1212 | 0.8016 | 0.8226 |
| 0.0066 | 4.0333 | 1515 | 0.3179 | 0.9194 |
| 1.5098 | 5.0333 | 1818 | 1.1273 | 0.7702 |
| 0.0016 | 6.0333 | 2121 | 0.1973 | 0.9637 |
| 0.2736 | 7.0333 | 2424 | 1.0816 | 0.7984 |
| 0.6896 | 8.0333 | 2727 | 0.4091 | 0.9194 |
| 0.2791 | 9.0333 | 3030 | 0.2331 | 0.9597 |
| 0.4142 | 10.0333 | 3333 | 0.3057 | 0.9315 |
| 0.3853 | 11.0333 | 3636 | 0.3853 | 0.9274 |
| 0.0004 | 12.0333 | 3939 | 0.1782 | 0.9718 |
| 0.0164 | 13.0333 | 4242 | 0.5571 | 0.9032 |
| 0.0002 | 14.0333 | 4545 | 0.1784 | 0.9597 |
| 0.1403 | 15.0333 | 4848 | 0.1136 | 0.9758 |
| 0.0003 | 16.0333 | 5151 | 0.1628 | 0.9677 |
| 0.2939 | 17.0333 | 5454 | 0.1729 | 0.9718 |
| 0.0001 | 18.0333 | 5757 | 0.1332 | 0.9718 |
| 0.0002 | 19.0333 | 6060 | 0.2212 | 0.9637 |
| 0.0007 | 20.0333 | 6363 | 0.4098 | 0.9274 |
| 0.02 | 21.0333 | 6666 | 0.3855 | 0.9395 |
| 0.0164 | 22.0333 | 6969 | 0.2359 | 0.9597 |
| 0.0002 | 23.0333 | 7272 | 0.2383 | 0.9677 |
| 0.0001 | 24.0333 | 7575 | 0.1351 | 0.9798 |
| 0.1064 | 25.0333 | 7878 | 0.1471 | 0.9798 |
| 0.0001 | 26.0333 | 8181 | 0.1395 | 0.9798 |
| 0.0001 | 27.0333 | 8484 | 0.1639 | 0.9758 |
| 0.0001 | 28.0333 | 8787 | 0.2662 | 0.9637 |
| 0.0001 | 29.0333 | 9090 | 0.2468 | 0.9677 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base", "model-index": [{"name": "videomae-base-ssbd-trim-yolo", "results": []}]} | Amit7Singh/videomae-base-ssbd-trim-yolo | null | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T23:52:18+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| videomae-base-ssbd-trim-yolo
============================
This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0946
* Accuracy: 0.9844
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 9090
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 9090",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 9090",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null |
3-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B).
| {"license": "other", "license_name": "llama3", "license_link": "LICENSE"} | numen-tech/Llama-3-Smaug-8B-w3a16g40sym | null | [
"arxiv:2308.13137",
"license:other",
"region:us"
] | null | 2024-04-23T23:53:22+00:00 | [
"2308.13137"
] | [] | TAGS
#arxiv-2308.13137 #license-other #region-us
|
3-bit OmniQuant quantized version of Llama-3-Smaug-8B.
| [] | [
"TAGS\n#arxiv-2308.13137 #license-other #region-us \n"
] |