Dataset schema (column, dtype, observed range):

| Column | Dtype | Range |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | (all null) |
| tags | sequencelengths | 1–1.84k |
| sha | null | (all null) |
| created_at | stringlengths | 25–25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |
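The dtypes above follow the Hugging Face Datasets summary convention: `stringclasses` is a categorical string column with the given number of distinct values, `stringlengths` a free-text column with min–max character lengths, and `sequencelengths` a list column with min–max lengths. A minimal sketch of loading and inspecting such a dataset, assuming it is hosted on the Hub (the repo id below is a placeholder, since the dump does not name the dataset):

```python
from datasets import load_dataset

# Placeholder repo id: the dump does not say which dataset this schema belongs to.
ds = load_dataset("your-user/model-card-dump", split="train")

print(ds.features)           # column names and dtypes, matching the table above
row = ds[0]
print(row["pipeline_tag"])   # e.g. "token-classification"
print(row["library_name"])   # e.g. "transformers"
print(row["text"][:200])     # start of the raw model-card markdown
```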
pipeline_tag: token-classification
library_name: transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# v2-WtP-FT-6L-256BS-UD-cUD-Opus-cOpus

This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1245
- Precision: 0.4897
- Recall: 0.835
- F1: 0.6174
- Threshold: 0.2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
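This hyperparameter list maps one-to-one onto `transformers.TrainingArguments`. A minimal sketch of reproducing the configuration with the Hugging Face `Trainer`, assuming a single-device run (the `output_dir` and the per-device reading of the batch sizes are assumptions, not stated in the card):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the configuration listed above.
training_args = TrainingArguments(
    output_dir="v2-WtP-FT-6L-256BS-UD-cUD-Opus-cOpus",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=512,  # card: train_batch_size 512 (single device assumed)
    per_device_eval_batch_size=512,   # card: eval_batch_size 512
    seed=42,
    adam_beta1=0.9,                   # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```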
### Training results

The Trainer evaluated at step 500 (epoch 1.16), step 1000 (epoch 2.33), and step 1500 (epoch 3.49). At each checkpoint it logged validation loss, precision, recall, F1, and the selected decision threshold separately for each of several hundred unnamed evaluation subsets; training loss was not recorded at these intervals, so that column reads "No log" throughout. Each row carries the columns Training Loss, Epoch, Step, Validation Loss, Precision, Recall, F1, and Threshold. Per-subset scores span the whole quality range, from F1 = 1.0 on the easiest subsets down to roughly 0.25 on the hardest, with selected thresholds anywhere between 0.001 and 0.9; the dump cuts off partway through the step-1500 block.
| 0.0371 | 0.7908 | 0.775 | 0.7828 | 0.4 | | No log | 3.49 | 1500 | 0.0426 | 0.7059 | 0.66 | 0.6822 | 0.5 | | No log | 3.49 | 1500 | 0.0739 | 0.8387 | 0.78 | 0.8083 | 0.6 | | No log | 3.49 | 1500 | 0.0600 | 0.6531 | 0.8 | 0.7191 | 0.4 | | No log | 3.49 | 1500 | 0.1229 | 0.8295 | 0.9 | 0.8633 | 0.2 | | No log | 3.49 | 1500 | 0.0611 | 0.8490 | 0.815 | 0.8316 | 0.7000 | | No log | 3.49 | 1500 | 0.1271 | 0.7022 | 0.79 | 0.7435 | 0.5 | | No log | 3.49 | 1500 | 0.0190 | 0.8150 | 0.7085 | 0.7581 | 0.7000 | | No log | 3.49 | 1500 | 0.2968 | 0.1597 | 0.535 | 0.2460 | 0.001 | | No log | 3.49 | 1500 | 0.1242 | 0.7598 | 0.775 | 0.7673 | 0.3000 | | No log | 3.49 | 1500 | 0.0664 | 0.8 | 0.84 | 0.8195 | 0.6 | | No log | 3.49 | 1500 | 0.0677 | 0.8636 | 0.855 | 0.8593 | 0.6 | | No log | 3.49 | 1500 | 0.1111 | 0.8571 | 0.72 | 0.7826 | 0.4 | | No log | 3.49 | 1500 | 0.0673 | 0.7860 | 0.845 | 0.8145 | 0.5 | | No log | 3.49 | 1500 | 0.1241 | 0.8309 | 0.86 | 0.8452 | 0.2 | | No log | 3.49 | 1500 | 0.0533 | 0.7427 | 0.895 | 0.8118 | 0.4 | | No log | 3.49 | 1500 | 0.0788 | 0.7371 | 0.925 | 0.8204 | 0.3000 | | No log | 3.49 | 1500 | 0.0241 | 0.7679 | 0.86 | 0.8113 | 0.4 | | No log | 3.49 | 1500 | 0.0980 | 0.7525 | 0.76 | 0.7562 | 0.5 | | No log | 3.49 | 1500 | 0.0452 | 0.7945 | 0.87 | 0.8305 | 0.6 | | No log | 3.49 | 1500 | 0.0730 | 0.7658 | 0.85 | 0.8057 | 0.4 | | No log | 3.49 | 1500 | 0.0460 | 0.7588 | 0.865 | 0.8084 | 0.4 | | No log | 3.49 | 1500 | 0.1029 | 0.7455 | 0.82 | 0.7810 | 0.4 | | No log | 3.49 | 1500 | 0.0436 | 0.8844 | 0.88 | 0.8822 | 0.6 | | No log | 3.49 | 1500 | 0.0731 | 0.775 | 0.62 | 0.6889 | 0.4 | | No log | 3.49 | 1500 | 0.1198 | 0.6983 | 0.81 | 0.75 | 0.3000 | | No log | 3.49 | 1500 | 0.0264 | 0.8865 | 0.82 | 0.8519 | 0.5 | | No log | 3.49 | 1500 | 0.0780 | 0.7479 | 0.875 | 0.8065 | 0.4 | | No log | 3.49 | 1500 | 0.0943 | 0.7980 | 0.81 | 0.8040 | 0.5 | | No log | 3.49 | 1500 | 0.0779 | 0.7293 | 0.835 | 0.7786 | 0.5 | | No log | 3.49 | 1500 | 0.0319 | 0.7808 | 0.855 | 0.8162 | 0.4 | | No log | 3.49 | 1500 | 0.0203 | 0.7819 | 0.95 | 0.8578 | 0.099 | | No log | 3.49 | 1500 | 0.1126 | 0.7277 | 0.815 | 0.7689 | 0.5 | | No log | 3.49 | 1500 | 0.0633 | 0.5911 | 0.665 | 0.6259 | 0.3000 | | No log | 3.49 | 1500 | 0.1672 | 0.8358 | 0.84 | 0.8379 | 0.074 | | No log | 3.49 | 1500 | 0.1286 | 0.4264 | 0.55 | 0.4803 | 0.064 | | No log | 3.49 | 1500 | 0.1475 | 0.4362 | 0.6566 | 0.5242 | 0.004 | | No log | 3.49 | 1500 | 0.0784 | 0.855 | 0.855 | 0.855 | 0.3000 | | No log | 3.49 | 1500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 | | No log | 3.49 | 1500 | 0.0100 | 0.7964 | 0.8934 | 0.8421 | 0.5 | | No log | 3.49 | 1500 | 0.0037 | 0.9657 | 0.985 | 0.9752 | 0.5 | | No log | 3.49 | 1500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.056 | | No log | 3.49 | 1500 | 0.0029 | 1.0 | 1.0 | 1.0 | 0.2 | | No log | 3.49 | 1500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.2 | | No log | 3.49 | 1500 | 0.0029 | 0.9947 | 1.0 | 0.9973 | 0.008 | | No log | 3.49 | 1500 | 0.0023 | 0.9851 | 0.995 | 0.9900 | 0.4 | | No log | 3.49 | 1500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 | | No log | 3.49 | 1500 | 0.0014 | 0.9900 | 0.995 | 0.9925 | 0.3000 | | No log | 3.49 | 1500 | 0.0016 | 0.9901 | 1.0 | 0.9950 | 0.3000 | | No log | 3.49 | 1500 | 0.0076 | 0.9899 | 0.98 | 0.9849 | 0.0260 | | No log | 3.49 | 1500 | 0.0189 | 0.9946 | 0.925 | 0.9585 | 0.4 | | No log | 3.49 | 1500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 | | No log | 3.49 | 1500 | 0.0137 | 0.9692 | 0.945 | 0.9570 | 0.2 | | No log | 3.49 | 1500 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.4 | | No log | 3.49 | 1500 | 0.0000 | 1.0 | 1.0 | 1.0 | 
0.002 | | No log | 3.49 | 1500 | 0.0030 | 0.9949 | 0.985 | 0.9899 | 0.4 | | No log | 3.49 | 1500 | 0.0024 | 1.0 | 0.98 | 0.9899 | 0.9 | | No log | 3.49 | 1500 | 0.0020 | 0.9949 | 0.975 | 0.9848 | 0.9 | | No log | 3.49 | 1500 | 0.0380 | 0.9314 | 0.815 | 0.8693 | 0.7000 | | No log | 3.49 | 1500 | 0.0008 | 0.9950 | 1.0 | 0.9975 | 0.0260 | | No log | 3.49 | 1500 | 0.0022 | 0.9755 | 0.995 | 0.9851 | 0.046 | | No log | 3.49 | 1500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.0880 | | No log | 3.49 | 1500 | 0.0007 | 1.0 | 0.995 | 0.9975 | 0.7000 | | No log | 3.49 | 1500 | 0.0018 | 0.9900 | 0.995 | 0.9925 | 0.049 | | No log | 3.49 | 1500 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.07 | | No log | 3.49 | 1500 | 0.0045 | 0.9343 | 0.995 | 0.9637 | 0.2 | | No log | 3.49 | 1500 | 0.0013 | 0.9901 | 1.0 | 0.9950 | 0.09 | | No log | 3.49 | 1500 | 0.0177 | 0.9948 | 0.95 | 0.9719 | 0.084 | | No log | 3.49 | 1500 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.6 | | No log | 3.49 | 1500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.029 | | No log | 3.49 | 1500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.002 | | No log | 3.49 | 1500 | 0.0012 | 1.0 | 1.0 | 1.0 | 0.7000 | | No log | 3.49 | 1500 | 0.0042 | 0.9662 | 1.0 | 0.9828 | 0.024 | | No log | 3.49 | 1500 | 0.0007 | 1.0 | 0.995 | 0.9975 | 0.6 | | No log | 3.49 | 1500 | 0.0056 | 0.9792 | 1.0 | 0.9895 | 0.5 | | No log | 3.49 | 1500 | 0.0052 | 0.9279 | 0.965 | 0.9461 | 0.2 | | No log | 3.49 | 1500 | 0.0091 | 0.9346 | 1.0 | 0.9662 | 0.024 | | No log | 3.49 | 1500 | 0.0020 | 0.995 | 0.995 | 0.995 | 0.5 | | No log | 3.49 | 1500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 | | No log | 3.49 | 1500 | 0.0021 | 0.9851 | 0.995 | 0.9900 | 0.6 | | No log | 3.49 | 1500 | 0.0017 | 0.9949 | 0.985 | 0.9899 | 0.3000 | | No log | 3.49 | 1500 | 0.0160 | 0.9231 | 0.96 | 0.9412 | 0.6 | | No log | 3.49 | 1500 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.6 | | No log | 3.49 | 1500 | 0.0029 | 0.99 | 0.99 | 0.99 | 0.6 | | No log | 3.49 | 1500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.007 | | No log | 3.49 | 1500 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.049 | | No log | 3.49 | 1500 | 0.0105 | 0.9606 | 0.975 | 0.9677 | 0.3000 | | No log | 3.49 | 1500 | 0.0017 | 0.9917 | 1.0 | 0.9959 | 0.004 | | No log | 3.49 | 1500 | 0.0224 | 0.8702 | 0.905 | 0.8873 | 0.2 | | No log | 3.49 | 1500 | 0.0085 | 0.9950 | 1.0 | 0.9975 | 0.002 | | No log | 3.49 | 1500 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.9 | | No log | 3.49 | 1500 | 0.0010 | 0.9901 | 1.0 | 0.9950 | 0.0190 | | No log | 3.49 | 1500 | 0.0012 | 1.0 | 0.995 | 0.9975 | 0.5 | | No log | 3.49 | 1500 | 0.0149 | 0.7661 | 0.95 | 0.8482 | 0.02 | | No log | 3.49 | 1500 | 0.0198 | 0.9610 | 0.985 | 0.9728 | 0.002 | | No log | 3.49 | 1500 | 0.0077 | 0.9844 | 0.945 | 0.9643 | 0.6 | | No log | 3.49 | 1500 | 0.0196 | 0.6864 | 0.8223 | 0.7483 | 0.6 | | No log | 3.49 | 1500 | 0.0129 | 0.8673 | 0.85 | 0.8586 | 0.5 | | No log | 3.49 | 1500 | 0.0637 | 0.7576 | 0.875 | 0.8121 | 0.3000 | | No log | 3.49 | 1500 | 0.1478 | 0.72 | 0.6429 | 0.6792 | 0.2 | | No log | 3.49 | 1500 | 0.0091 | 0.8952 | 0.94 | 0.9171 | 0.3000 | | No log | 3.49 | 1500 | 0.1107 | 0.7772 | 0.7979 | 0.7874 | 0.4 | | No log | 3.49 | 1500 | 0.0426 | 0.8182 | 0.855 | 0.8362 | 0.4 | | No log | 3.49 | 1500 | 0.0401 | 0.7905 | 0.83 | 0.8098 | 0.3000 | | No log | 3.49 | 1500 | 0.0373 | 0.8502 | 0.88 | 0.8649 | 0.3000 | | No log | 3.49 | 1500 | 0.0389 | 0.8544 | 0.88 | 0.8670 | 0.4 | | No log | 3.49 | 1500 | 0.0309 | 0.7972 | 0.865 | 0.8297 | 0.2 | | No log | 3.49 | 1500 | 0.0493 | 0.8061 | 0.79 | 0.7980 | 0.4 | | No log | 3.49 | 1500 | 0.0304 | 0.6904 | 0.825 | 0.7517 | 0.3000 | | No log | 3.49 | 1500 | 0.0584 | 0.7433 | 
0.695 | 0.7183 | 0.5 | | No log | 3.49 | 1500 | 0.0635 | 0.7227 | 0.86 | 0.7854 | 0.2 | | No log | 3.49 | 1500 | 0.0068 | 0.9614 | 0.995 | 0.9779 | 0.0880 | | No log | 3.49 | 1500 | 0.0513 | 0.7424 | 0.735 | 0.7387 | 0.5 | | No log | 3.49 | 1500 | 0.0334 | 0.8142 | 0.745 | 0.7781 | 0.5 | | No log | 3.49 | 1500 | 0.0376 | 0.8333 | 0.7071 | 0.7650 | 0.6 | | No log | 3.49 | 1500 | 0.0636 | 0.7243 | 0.7789 | 0.7506 | 0.4 | | No log | 3.49 | 1500 | 0.0326 | 0.7895 | 0.75 | 0.7692 | 0.4 | | No log | 3.49 | 1500 | 0.0244 | 0.8416 | 0.85 | 0.8458 | 0.4 | | No log | 3.49 | 1500 | 0.0091 | 0.9492 | 0.935 | 0.9421 | 0.5 | | No log | 3.49 | 1500 | 0.0396 | 0.6510 | 0.83 | 0.7297 | 0.3000 | | No log | 3.49 | 1500 | 0.0396 | 0.7108 | 0.885 | 0.7884 | 0.2 | | No log | 3.49 | 1500 | 0.0419 | 0.8624 | 0.815 | 0.8380 | 0.5 | | No log | 3.49 | 1500 | 0.0338 | 0.7059 | 0.72 | 0.7129 | 0.5 | | No log | 3.49 | 1500 | 0.0402 | 0.7294 | 0.795 | 0.7608 | 0.4 | | No log | 3.49 | 1500 | 0.1008 | 0.8452 | 0.71 | 0.7717 | 0.047 | | No log | 3.49 | 1500 | 0.0868 | 0.6145 | 0.765 | 0.6815 | 0.2 | | No log | 3.49 | 1500 | 0.0253 | 0.9794 | 0.95 | 0.9645 | 0.2 | | No log | 3.49 | 1500 | 0.0013 | 1.0 | 1.0 | 1.0 | 0.0260 | | No log | 3.49 | 1500 | 0.0197 | 0.9187 | 0.96 | 0.9389 | 0.2 | | No log | 3.49 | 1500 | 0.0539 | 0.7474 | 0.725 | 0.7360 | 0.4 | | No log | 3.49 | 1500 | 0.0566 | 0.7805 | 0.64 | 0.7033 | 0.6 | | No log | 3.49 | 1500 | 0.1275 | 0.6522 | 0.6383 | 0.6452 | 0.5 | | No log | 3.49 | 1500 | 0.0366 | 0.6161 | 0.65 | 0.6326 | 0.3000 | | No log | 3.49 | 1500 | 0.0274 | 0.8219 | 0.9 | 0.8592 | 0.3000 | | No log | 3.49 | 1500 | 0.0371 | 0.8273 | 0.91 | 0.8667 | 0.4 | | No log | 3.49 | 1500 | 0.0108 | 0.9745 | 0.955 | 0.9646 | 0.7000 | | No log | 3.49 | 1500 | 0.0399 | 0.8656 | 0.805 | 0.8342 | 0.5 | | No log | 3.49 | 1500 | 0.0353 | 0.7174 | 0.66 | 0.6875 | 0.6 | | No log | 3.49 | 1500 | 0.0845 | 0.5823 | 0.725 | 0.6459 | 0.3000 | | No log | 3.49 | 1500 | 0.0311 | 0.9096 | 0.855 | 0.8814 | 0.5 | | No log | 3.49 | 1500 | 0.0278 | 0.8039 | 0.82 | 0.8119 | 0.5 | | No log | 3.49 | 1500 | 0.0126 | 0.9206 | 0.9667 | 0.9431 | 0.3000 | | No log | 3.49 | 1500 | 0.0169 | 0.9105 | 0.865 | 0.8872 | 0.6 | | No log | 3.49 | 1500 | 0.0323 | 0.9167 | 0.88 | 0.8980 | 0.6 | | No log | 3.49 | 1500 | 0.0107 | 0.848 | 0.8833 | 0.8653 | 0.3000 | | No log | 3.49 | 1500 | 0.0246 | 0.8812 | 0.89 | 0.8856 | 0.3000 | | No log | 3.49 | 1500 | 0.0634 | 0.8219 | 0.9 | 0.8592 | 0.2 | | No log | 3.49 | 1500 | 0.0572 | 0.6266 | 0.73 | 0.6744 | 0.3000 | | No log | 3.49 | 1500 | 0.0037 | 0.9851 | 0.99 | 0.9875 | 0.4 | | No log | 3.49 | 1500 | 0.0933 | 0.5731 | 0.745 | 0.6478 | 0.3000 | | No log | 3.49 | 1500 | 0.0454 | 0.7647 | 0.325 | 0.4561 | 0.5 | | No log | 3.49 | 1500 | 0.0648 | 0.7684 | 0.73 | 0.7487 | 0.09 | | No log | 3.49 | 1500 | 0.1281 | 0.6154 | 0.8 | 0.6957 | 0.2 | | No log | 3.49 | 1500 | 0.0939 | 0.5159 | 0.81 | 0.6304 | 0.081 | | No log | 3.49 | 1500 | 0.0795 | 0.5417 | 0.78 | 0.6393 | 0.07 | | No log | 3.49 | 1500 | 0.1432 | 0.7289 | 0.82 | 0.7718 | 0.0430 | | No log | 3.49 | 1500 | 0.0706 | 0.6378 | 0.81 | 0.7137 | 0.3000 | | No log | 3.49 | 1500 | 0.0549 | 0.7082 | 0.825 | 0.7621 | 0.3000 | | No log | 3.49 | 1500 | 0.0491 | 0.4896 | 0.4724 | 0.4808 | 0.3000 | | No log | 3.49 | 1500 | 0.0641 | 0.6041 | 0.885 | 0.7181 | 0.2 | | No log | 3.49 | 1500 | 0.0791 | 0.5181 | 0.7818 | 0.6232 | 0.2 | | No log | 3.49 | 1500 | 0.0634 | 0.7843 | 0.8 | 0.7921 | 0.2 | | No log | 3.49 | 1500 | 0.0634 | 0.7843 | 0.8 | 0.7921 | 0.2 | | No log | 
3.49 | 1500 | 0.0539 | 0.7330 | 0.81 | 0.7696 | 0.2 | | No log | 3.49 | 1500 | 0.0756 | 0.7203 | 0.85 | 0.7798 | 0.095 | | No log | 3.49 | 1500 | 0.0523 | 0.6232 | 0.86 | 0.7227 | 0.1 | | No log | 3.49 | 1500 | 0.0547 | 0.7330 | 0.755 | 0.7438 | 0.4 | | No log | 3.49 | 1500 | 0.1021 | 0.6753 | 0.655 | 0.6650 | 0.3000 | | No log | 3.49 | 1500 | 0.0615 | 0.6121 | 0.86 | 0.7152 | 0.2 | | No log | 3.49 | 1500 | 0.0539 | 0.7926 | 0.86 | 0.8249 | 0.3000 | | No log | 3.49 | 1500 | 0.1689 | 0.5171 | 0.68 | 0.5875 | 0.1 | | No log | 3.49 | 1500 | 0.0733 | 0.6987 | 0.8 | 0.7459 | 0.2 | | No log | 3.49 | 1500 | 0.1298 | 0.6723 | 0.8 | 0.7306 | 0.067 | | No log | 3.49 | 1500 | 0.0619 | 0.6525 | 0.845 | 0.7364 | 0.066 | | No log | 3.49 | 1500 | 0.0619 | 0.6525 | 0.845 | 0.7364 | 0.066 | | No log | 3.49 | 1500 | 0.0372 | 0.6857 | 0.5217 | 0.5926 | 0.5 | | No log | 3.49 | 1500 | 0.0372 | 0.6857 | 0.5217 | 0.5926 | 0.5 | | No log | 3.49 | 1500 | 0.0588 | 0.7065 | 0.71 | 0.7082 | 0.3000 | | No log | 3.49 | 1500 | 0.0489 | 0.5586 | 0.405 | 0.4696 | 0.5 | | No log | 3.49 | 1500 | 0.0760 | 0.3333 | 0.9231 | 0.4898 | 0.012 | | No log | 3.49 | 1500 | 0.0508 | 0.6008 | 0.73 | 0.6591 | 0.4 | | No log | 3.49 | 1500 | 0.0449 | 0.5121 | 0.74 | 0.6053 | 0.2 | | No log | 3.49 | 1500 | 0.0638 | 0.7089 | 0.755 | 0.7312 | 0.3000 | | No log | 3.49 | 1500 | 0.0530 | 0.6961 | 0.71 | 0.7030 | 0.6 | | No log | 3.49 | 1500 | 0.0759 | 0.6681 | 0.765 | 0.7133 | 0.3000 | | No log | 3.49 | 1500 | 0.1174 | 0.2329 | 0.4904 | 0.3158 | 0.0860 | | No log | 3.49 | 1500 | 0.1085 | 0.6360 | 0.83 | 0.7202 | 0.0870 | | No log | 3.49 | 1500 | 0.0675 | 0.6584 | 0.8 | 0.7223 | 0.4 | | No log | 3.49 | 1500 | 0.2868 | 0.4108 | 0.645 | 0.5019 | 0.001 | | No log | 3.49 | 1500 | 0.1125 | 0.7100 | 0.82 | 0.7610 | 0.6 | | No log | 3.49 | 1500 | 0.0842 | 0.5808 | 0.755 | 0.6565 | 0.2 | | No log | 3.49 | 1500 | 0.1597 | 0.7762 | 0.815 | 0.7951 | 0.092 | | No log | 3.49 | 1500 | 0.1477 | 1.0 | 0.18 | 0.3051 | 0.9 | | No log | 3.49 | 1500 | 0.0848 | 0.4131 | 0.535 | 0.4662 | 0.084 | | No log | 3.49 | 1500 | 0.0690 | 0.7942 | 0.965 | 0.8713 | 0.7000 | | No log | 3.49 | 1500 | 0.0581 | 0.4255 | 0.6 | 0.4979 | 0.0300 | | No log | 3.49 | 1500 | 0.0743 | 0.7512 | 0.77 | 0.7605 | 0.3000 | | No log | 3.49 | 1500 | 0.1125 | 0.4 | 0.8333 | 0.5405 | 0.085 | | No log | 3.49 | 1500 | 0.0684 | 0.6612 | 0.81 | 0.7281 | 0.2 | | No log | 3.49 | 1500 | 0.0578 | 0.8212 | 0.735 | 0.7757 | 0.5 | | No log | 3.49 | 1500 | 0.0925 | 0.6056 | 0.545 | 0.5737 | 0.2 | | No log | 3.49 | 1500 | 0.0645 | 0.7537 | 0.765 | 0.7593 | 0.3000 | | No log | 3.49 | 1500 | 0.0561 | 0.4956 | 0.565 | 0.5280 | 0.4 | | No log | 3.49 | 1500 | 0.0433 | 0.7965 | 0.9 | 0.8451 | 0.089 | | No log | 3.49 | 1500 | 0.1178 | 0.3768 | 0.52 | 0.4370 | 0.5 | | No log | 3.49 | 1500 | 0.0574 | 0.6914 | 0.8894 | 0.7780 | 0.2 | | No log | 3.49 | 1500 | 0.0937 | 0.4437 | 0.6893 | 0.5399 | 0.095 | | No log | 3.49 | 1500 | 0.1404 | 0.8889 | 0.2 | 0.3265 | 0.9 | | No log | 3.49 | 1500 | 0.0602 | 0.7377 | 0.675 | 0.7050 | 0.3000 | | No log | 3.49 | 1500 | 0.0797 | 0.7767 | 0.835 | 0.8048 | 0.2 | | No log | 3.49 | 1500 | 0.0797 | 0.7767 | 0.835 | 0.8048 | 0.2 | | No log | 3.49 | 1500 | 0.0593 | 0.4730 | 0.745 | 0.5786 | 0.0590 | | No log | 3.49 | 1500 | 0.0841 | 0.7161 | 0.8535 | 0.7788 | 0.2 | | No log | 3.49 | 1500 | 0.0569 | 0.6267 | 0.7085 | 0.6651 | 0.4 | | No log | 3.49 | 1500 | 0.0612 | 0.7130 | 0.82 | 0.7628 | 0.4 | | No log | 3.49 | 1500 | 0.0631 | 0.7629 | 0.74 | 0.7513 | 0.3000 | | No log | 3.49 | 1500 | 
0.0490 | 0.7536 | 0.78 | 0.7666 | 0.3000 | | No log | 3.49 | 1500 | 0.0582 | 0.4967 | 0.75 | 0.5976 | 0.3000 | | No log | 3.49 | 1500 | 0.0659 | 0.6929 | 0.835 | 0.7574 | 0.3000 | | No log | 3.49 | 1500 | 0.0569 | 0.7206 | 0.89 | 0.7964 | 0.2 | | No log | 3.49 | 1500 | 0.0533 | 0.7238 | 0.76 | 0.7415 | 0.3000 | | No log | 3.49 | 1500 | 0.0455 | 0.8118 | 0.755 | 0.7824 | 0.4 | | No log | 3.49 | 1500 | 0.0900 | 0.625 | 0.825 | 0.7112 | 0.083 | | No log | 3.49 | 1500 | 0.0525 | 0.5161 | 0.48 | 0.4974 | 0.3000 | | No log | 3.49 | 1500 | 0.0626 | 0.4664 | 0.625 | 0.5342 | 0.3000 | | No log | 3.49 | 1500 | 0.0719 | 0.5978 | 0.825 | 0.6933 | 0.0710 | | No log | 3.49 | 1500 | 0.0655 | 0.4901 | 0.87 | 0.6270 | 0.077 | | No log | 3.49 | 1500 | 0.0671 | 0.8053 | 0.765 | 0.7846 | 0.6 | | No log | 3.49 | 1500 | 0.0498 | 0.7612 | 0.765 | 0.7631 | 0.4 | | No log | 3.49 | 1500 | 0.1001 | 0.5249 | 0.685 | 0.5944 | 0.3000 | | No log | 3.49 | 1500 | 0.0841 | 0.6811 | 0.865 | 0.7621 | 0.08 | | No log | 3.49 | 1500 | 0.0552 | 0.6132 | 0.745 | 0.6727 | 0.4 | | No log | 3.49 | 1500 | 0.0552 | 0.6132 | 0.745 | 0.6727 | 0.4 | | No log | 3.49 | 1500 | 0.0552 | 0.6132 | 0.745 | 0.6727 | 0.4 | | No log | 3.49 | 1500 | 0.0552 | 0.6132 | 0.745 | 0.6727 | 0.4 | | No log | 3.49 | 1500 | 0.2621 | 0.2857 | 0.5930 | 0.3856 | 0.001 | | No log | 3.49 | 1500 | 0.0931 | 0.5649 | 0.8090 | 0.6653 | 0.039 | | No log | 3.49 | 1500 | 0.0197 | 0.9366 | 0.96 | 0.9481 | 0.3000 | | No log | 3.49 | 1500 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.3000 | | No log | 3.49 | 1500 | 0.0032 | 1.0 | 0.99 | 0.9950 | 0.2 | | No log | 3.49 | 1500 | 0.0004 | 1.0 | 0.995 | 0.9975 | 0.6 | | No log | 3.49 | 1500 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 | | No log | 3.49 | 1500 | 0.0009 | 0.9950 | 1.0 | 0.9975 | 0.3000 | | No log | 3.49 | 1500 | 0.0021 | 1.0 | 0.995 | 0.9975 | 0.8 | | No log | 3.49 | 1500 | 0.0036 | 0.9950 | 0.99 | 0.9925 | 0.9 | | No log | 3.49 | 1500 | 0.0038 | 0.98 | 0.98 | 0.98 | 0.3000 | | No log | 3.49 | 1500 | 0.0077 | 0.9851 | 0.995 | 0.9900 | 0.008 | | No log | 3.49 | 1500 | 0.0254 | 0.9347 | 0.93 | 0.9323 | 0.047 | | No log | 3.49 | 1500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 | | No log | 3.49 | 1500 | 0.0504 | 0.8108 | 0.75 | 0.7792 | 0.2 | | No log | 3.49 | 1500 | 0.0015 | 0.9950 | 1.0 | 0.9975 | 0.9 | | No log | 3.49 | 1500 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.3000 | | No log | 3.49 | 1500 | 0.0036 | 0.9950 | 0.99 | 0.9925 | 0.3000 | | No log | 3.49 | 1500 | 0.0072 | 0.96 | 0.96 | 0.96 | 0.7000 | | No log | 3.49 | 1500 | 0.0009 | 1.0 | 0.995 | 0.9975 | 0.5 | | No log | 3.49 | 1500 | 0.0007 | 1.0 | 0.995 | 0.9975 | 0.6 | | No log | 3.49 | 1500 | 0.0028 | 0.995 | 0.995 | 0.995 | 0.2 | | No log | 3.49 | 1500 | 0.0014 | 1.0 | 0.995 | 0.9975 | 0.3000 | | No log | 3.49 | 1500 | 0.0270 | 0.9282 | 0.905 | 0.9165 | 0.0220 | | No log | 3.49 | 1500 | 0.1419 | 0.3299 | 0.64 | 0.4354 | 0.6 | | No log | 3.49 | 1500 | 0.1089 | 0.3357 | 0.3310 | 0.3333 | 0.2 | | No log | 3.49 | 1500 | 0.1393 | 0.6684 | 0.625 | 0.6460 | 0.4 | | No log | 3.49 | 1500 | 0.1239 | 0.5214 | 0.73 | 0.6083 | 0.3000 | | No log | 4.65 | 2000 | 0.0509 | 0.9672 | 0.885 | 0.9243 | 0.7000 | | No log | 4.65 | 2000 | 0.0159 | 0.9333 | 0.84 | 0.8842 | 0.6 | | No log | 4.65 | 2000 | 0.0412 | 0.8194 | 0.885 | 0.8510 | 0.3000 | | No log | 4.65 | 2000 | 0.0181 | 0.9179 | 0.895 | 0.9063 | 0.3000 | | No log | 4.65 | 2000 | 0.0371 | 0.9722 | 0.875 | 0.9211 | 0.7000 | | No log | 4.65 | 2000 | 0.0108 | 0.9803 | 0.995 | 0.9876 | 0.7000 | | No log | 4.65 | 2000 | 0.0188 | 0.8995 | 0.9447 | 0.9216 | 0.3000 | | 
No log | 4.65 | 2000 | 0.0130 | 0.9610 | 0.985 | 0.9728 | 0.4 | | No log | 4.65 | 2000 | 0.0161 | 0.9256 | 0.995 | 0.9590 | 0.6 | | No log | 4.65 | 2000 | 0.0483 | 0.9457 | 0.87 | 0.9062 | 0.7000 | | No log | 4.65 | 2000 | 0.0114 | 0.9610 | 0.985 | 0.9728 | 0.7000 | | No log | 4.65 | 2000 | 0.0143 | 0.94 | 0.94 | 0.94 | 0.7000 | | No log | 4.65 | 2000 | 0.0083 | 0.9612 | 0.99 | 0.9754 | 0.4 | | No log | 4.65 | 2000 | 0.0218 | 0.9387 | 0.995 | 0.9660 | 0.5 | | No log | 4.65 | 2000 | 0.0147 | 0.9565 | 0.99 | 0.9730 | 0.5 | | No log | 4.65 | 2000 | 0.0113 | 0.9469 | 0.98 | 0.9631 | 0.6 | | No log | 4.65 | 2000 | 0.0131 | 0.9507 | 0.9797 | 0.965 | 0.8 | | No log | 4.65 | 2000 | 0.0113 | 0.9050 | 1.0 | 0.9501 | 0.083 | | No log | 4.65 | 2000 | 0.0526 | 0.8194 | 0.9347 | 0.8732 | 0.4 | | No log | 4.65 | 2000 | 0.0175 | 0.9302 | 1.0 | 0.9639 | 0.082 | | No log | 4.65 | 2000 | 0.0165 | 0.8964 | 0.995 | 0.9431 | 0.0130 | | No log | 4.65 | 2000 | 0.0604 | 0.9737 | 0.925 | 0.9487 | 0.2 | | No log | 4.65 | 2000 | 0.0085 | 0.9691 | 0.94 | 0.9543 | 0.6 | | No log | 4.65 | 2000 | 0.0065 | 0.9848 | 0.9848 | 0.9848 | 0.2 | | No log | 4.65 | 2000 | 0.0215 | 0.9471 | 0.985 | 0.9657 | 0.4 | | No log | 4.65 | 2000 | 0.0531 | 0.9053 | 0.86 | 0.8821 | 0.5 | | No log | 4.65 | 2000 | 0.0071 | 0.9108 | 0.9749 | 0.9417 | 0.2 | | No log | 4.65 | 2000 | 0.0212 | 0.9409 | 0.955 | 0.9479 | 0.7000 | | No log | 4.65 | 2000 | 0.0216 | 0.91 | 0.91 | 0.91 | 0.6 | | No log | 4.65 | 2000 | 0.0163 | 0.9519 | 0.99 | 0.9706 | 0.3000 | | No log | 4.65 | 2000 | 0.0448 | 0.9112 | 0.975 | 0.9420 | 0.034 | | No log | 4.65 | 2000 | 0.0302 | 0.9249 | 0.985 | 0.9540 | 0.6 | | No log | 4.65 | 2000 | 0.0156 | 0.9417 | 0.97 | 0.9557 | 0.0590 | | No log | 4.65 | 2000 | 0.0122 | 0.9707 | 0.995 | 0.9827 | 0.3000 | | No log | 4.65 | 2000 | 0.0120 | 0.97 | 0.97 | 0.97 | 0.5 | | No log | 4.65 | 2000 | 0.1964 | 0.6219 | 0.88 | 0.7288 | 0.0090 | | No log | 4.65 | 2000 | 0.0097 | 0.9420 | 0.975 | 0.9582 | 0.4 | | No log | 4.65 | 2000 | 0.0247 | 0.9704 | 0.985 | 0.9777 | 0.07 | | No log | 4.65 | 2000 | 0.1652 | 0.6486 | 0.6030 | 0.625 | 0.048 | | No log | 4.65 | 2000 | 0.0496 | 0.9836 | 0.9 | 0.9399 | 0.3000 | | No log | 4.65 | 2000 | 0.0623 | 0.8564 | 0.865 | 0.8607 | 0.7000 | | No log | 4.65 | 2000 | 0.0123 | 0.9742 | 0.945 | 0.9594 | 0.2 | | No log | 4.65 | 2000 | 0.0156 | 0.9755 | 0.995 | 0.9851 | 0.4 | | No log | 4.65 | 2000 | 0.0079 | 0.965 | 0.9698 | 0.9674 | 0.4 | | No log | 4.65 | 2000 | 0.0067 | 0.9844 | 0.945 | 0.9643 | 0.9 | | No log | 4.65 | 2000 | 0.0047 | 0.9744 | 0.95 | 0.9620 | 0.9 | | No log | 4.65 | 2000 | 0.0102 | 0.9524 | 1.0 | 0.9756 | 0.2 | | No log | 4.65 | 2000 | 0.0348 | 0.8837 | 0.95 | 0.9157 | 0.5 | | No log | 4.65 | 2000 | 0.0198 | 0.9849 | 0.98 | 0.9825 | 0.7000 | | No log | 4.65 | 2000 | 0.0242 | 0.9565 | 0.99 | 0.9730 | 0.3000 | | No log | 4.65 | 2000 | 0.0208 | 0.9608 | 0.98 | 0.9703 | 0.8 | | No log | 4.65 | 2000 | 0.0061 | 0.945 | 0.9594 | 0.9521 | 0.9 | | No log | 4.65 | 2000 | 0.1991 | 0.5143 | 0.72 | 0.6 | 0.005 | | No log | 4.65 | 2000 | 0.0691 | 0.8989 | 0.845 | 0.8711 | 0.5 | | No log | 4.65 | 2000 | 0.0162 | 0.9431 | 0.995 | 0.9684 | 0.3000 | | No log | 4.65 | 2000 | 0.0161 | 0.9614 | 0.995 | 0.9779 | 0.6 | | No log | 4.65 | 2000 | 0.1176 | 0.8473 | 0.86 | 0.8536 | 0.2 | | No log | 4.65 | 2000 | 0.0096 | 0.9614 | 0.995 | 0.9779 | 0.5 | | No log | 4.65 | 2000 | 0.1098 | 0.9314 | 0.815 | 0.8693 | 0.6 | | No log | 4.65 | 2000 | 0.0107 | 0.9565 | 0.99 | 0.9730 | 0.8 | | No log | 4.65 | 2000 | 0.0088 | 0.985 | 
0.985 | 0.985 | 0.7000 | | No log | 4.65 | 2000 | 0.0125 | 0.9194 | 0.97 | 0.9440 | 0.8 | | No log | 4.65 | 2000 | 0.0476 | 0.8768 | 0.925 | 0.9002 | 0.4 | | No log | 4.65 | 2000 | 0.0066 | 0.9660 | 0.995 | 0.9803 | 0.5 | | No log | 4.65 | 2000 | 0.0120 | 0.9522 | 0.995 | 0.9731 | 0.6 | | No log | 4.65 | 2000 | 0.0094 | 0.9563 | 0.985 | 0.9704 | 0.7000 | | No log | 4.65 | 2000 | 0.0162 | 0.9567 | 0.995 | 0.9755 | 0.2 | | No log | 4.65 | 2000 | 0.0097 | 0.9569 | 1.0 | 0.9780 | 0.2 | | No log | 4.65 | 2000 | 0.0376 | 0.9043 | 0.85 | 0.8763 | 0.3000 | | No log | 4.65 | 2000 | 0.0650 | 0.9396 | 0.855 | 0.8953 | 0.7000 | | No log | 4.65 | 2000 | 0.0114 | 0.8959 | 0.99 | 0.9406 | 0.081 | | No log | 4.65 | 2000 | 0.0770 | 0.7945 | 0.87 | 0.8305 | 0.5 | | No log | 4.65 | 2000 | 0.0144 | 0.9519 | 0.99 | 0.9706 | 0.2 | | No log | 4.65 | 2000 | 0.0325 | 0.9041 | 0.99 | 0.9451 | 0.2 | | No log | 4.65 | 2000 | 0.0210 | 0.8364 | 0.92 | 0.8762 | 0.4 | | No log | 4.65 | 2000 | 0.0083 | 0.9108 | 0.97 | 0.9395 | 0.2 | | No log | 4.65 | 2000 | 0.0232 | 0.9381 | 0.985 | 0.9610 | 0.3000 | | No log | 4.65 | 2000 | 0.0162 | 0.9130 | 0.945 | 0.9287 | 0.3000 | | No log | 4.65 | 2000 | 0.1165 | 0.8619 | 0.905 | 0.8829 | 0.011 | | No log | 4.65 | 2000 | 0.0321 | 0.8517 | 0.89 | 0.8704 | 0.4 | | No log | 4.65 | 2000 | 0.0952 | 0.7453 | 0.7980 | 0.7707 | 0.003 | | No log | 4.65 | 2000 | 0.0261 | 0.9502 | 0.955 | 0.9526 | 0.2 | | No log | 4.65 | 2000 | 0.1213 | 0.8333 | 0.825 | 0.8291 | 0.4 | | No log | 4.65 | 2000 | 0.0260 | 0.8515 | 0.86 | 0.8557 | 0.3000 | | No log | 4.65 | 2000 | 0.0726 | 0.7056 | 0.815 | 0.7564 | 0.3000 | | No log | 4.65 | 2000 | 0.0291 | 0.8673 | 0.85 | 0.8586 | 0.3000 | | No log | 4.65 | 2000 | 0.0695 | 0.8571 | 0.87 | 0.8635 | 0.4 | | No log | 4.65 | 2000 | 0.0977 | 0.7895 | 0.825 | 0.8068 | 0.5 | | No log | 4.65 | 2000 | 0.0608 | 0.5856 | 0.5327 | 0.5579 | 0.4 | | No log | 4.65 | 2000 | 0.0628 | 0.7490 | 0.895 | 0.8155 | 0.3000 | | No log | 4.65 | 2000 | 0.0641 | 0.7844 | 0.855 | 0.8182 | 0.5 | | No log | 4.65 | 2000 | 0.1054 | 0.8360 | 0.79 | 0.8123 | 0.5 | | No log | 4.65 | 2000 | 0.0451 | 0.8045 | 0.885 | 0.8429 | 0.4 | | No log | 4.65 | 2000 | 0.0347 | 0.835 | 0.835 | 0.835 | 0.5 | | No log | 4.65 | 2000 | 0.0706 | 0.7467 | 0.84 | 0.7906 | 0.4 | | No log | 4.65 | 2000 | 0.0824 | 0.8528 | 0.84 | 0.8463 | 0.6 | | No log | 4.65 | 2000 | 0.0700 | 0.8210 | 0.94 | 0.8765 | 0.4 | | No log | 4.65 | 2000 | 0.0576 | 0.7404 | 0.87 | 0.8000 | 0.4 | | No log | 4.65 | 2000 | 0.0464 | 0.7905 | 0.8342 | 0.8117 | 0.5 | | No log | 4.65 | 2000 | 0.0600 | 0.7946 | 0.735 | 0.7636 | 0.6 | | No log | 4.65 | 2000 | 0.0808 | 0.7094 | 0.83 | 0.7650 | 0.4 | | No log | 4.65 | 2000 | 0.0647 | 0.8 | 0.88 | 0.8381 | 0.4 | | No log | 4.65 | 2000 | 0.0539 | 0.7489 | 0.82 | 0.7828 | 0.2 | | No log | 4.65 | 2000 | 0.1658 | 0.7867 | 0.885 | 0.8329 | 0.0870 | | No log | 4.65 | 2000 | 0.0264 | 0.7543 | 0.875 | 0.8102 | 0.2 | | No log | 4.65 | 2000 | 0.0443 | 0.9286 | 0.7919 | 0.8548 | 0.5 | | No log | 4.65 | 2000 | 0.0822 | 0.7564 | 0.885 | 0.8157 | 0.4 | | No log | 4.65 | 2000 | 0.0735 | 0.7656 | 0.8 | 0.7824 | 0.3000 | | No log | 4.65 | 2000 | 0.0271 | 0.7684 | 0.73 | 0.7487 | 0.4 | | No log | 4.65 | 2000 | 0.0895 | 0.75 | 0.825 | 0.7857 | 0.5 | | No log | 4.65 | 2000 | 0.0353 | 0.8523 | 0.75 | 0.7979 | 0.6 | | No log | 4.65 | 2000 | 0.0765 | 0.7465 | 0.81 | 0.7770 | 0.4 | | No log | 4.65 | 2000 | 0.1164 | 0.8380 | 0.75 | 0.7916 | 0.5 | | No log | 4.65 | 2000 | 0.1471 | 0.6929 | 0.835 | 0.7574 | 0.4 | | No log | 4.65 | 2000 
| 0.0815 | 0.7143 | 0.875 | 0.7865 | 0.067 | | No log | 4.65 | 2000 | 0.0799 | 0.725 | 0.87 | 0.7909 | 0.3000 | | No log | 4.65 | 2000 | 0.0673 | 0.7860 | 0.845 | 0.8145 | 0.5 | | No log | 4.65 | 2000 | 0.4901 | 0.5 | 0.795 | 0.6139 | 0.001 | | No log | 4.65 | 2000 | 0.0768 | 0.4652 | 0.635 | 0.5370 | 0.3000 | | No log | 4.65 | 2000 | 0.1208 | 0.9153 | 0.81 | 0.8594 | 0.5 | | No log | 4.65 | 2000 | 0.2032 | 0.5118 | 0.3266 | 0.3988 | 0.2 | | No log | 4.65 | 2000 | 0.0786 | 0.8357 | 0.865 | 0.8501 | 0.2 | | No log | 4.65 | 2000 | 0.0772 | 0.7407 | 0.8 | 0.7692 | 0.5 | | No log | 4.65 | 2000 | 0.0342 | 0.8827 | 0.865 | 0.8737 | 0.2 | | No log | 4.65 | 2000 | 0.0919 | 0.8363 | 0.945 | 0.8873 | 0.2 | | No log | 4.65 | 2000 | 0.0371 | 0.8274 | 0.8191 | 0.8232 | 0.5 | | No log | 4.65 | 2000 | 0.0377 | 0.8315 | 0.765 | 0.7969 | 0.5 | | No log | 4.65 | 2000 | 0.0431 | 0.6621 | 0.725 | 0.6921 | 0.4 | | No log | 4.65 | 2000 | 0.0754 | 0.8211 | 0.78 | 0.8 | 0.6 | | No log | 4.65 | 2000 | 0.0599 | 0.6681 | 0.795 | 0.7260 | 0.4 | | No log | 4.65 | 2000 | 0.1278 | 0.7957 | 0.935 | 0.8598 | 0.065 | | No log | 4.65 | 2000 | 0.0562 | 0.8325 | 0.87 | 0.8509 | 0.6 | | No log | 4.65 | 2000 | 0.1292 | 0.75 | 0.75 | 0.75 | 0.6 | | No log | 4.65 | 2000 | 0.0189 | 0.8618 | 0.6583 | 0.7464 | 0.8 | | No log | 4.65 | 2000 | 0.3225 | 0.1507 | 0.465 | 0.2277 | 0.001 | | No log | 4.65 | 2000 | 0.1246 | 0.7548 | 0.785 | 0.7696 | 0.3000 | | No log | 4.65 | 2000 | 0.0674 | 0.8342 | 0.805 | 0.8193 | 0.7000 | | No log | 4.65 | 2000 | 0.0657 | 0.8814 | 0.855 | 0.8680 | 0.6 | | No log | 4.65 | 2000 | 0.1103 | 0.8073 | 0.775 | 0.7908 | 0.3000 | | No log | 4.65 | 2000 | 0.0689 | 0.7982 | 0.87 | 0.8325 | 0.5 | | No log | 4.65 | 2000 | 0.1267 | 0.8325 | 0.87 | 0.8509 | 0.2 | | No log | 4.65 | 2000 | 0.0533 | 0.7458 | 0.895 | 0.8136 | 0.4 | | No log | 4.65 | 2000 | 0.0795 | 0.7574 | 0.89 | 0.8184 | 0.4 | | No log | 4.65 | 2000 | 0.0245 | 0.8265 | 0.81 | 0.8182 | 0.6 | | No log | 4.65 | 2000 | 0.0959 | 0.7358 | 0.78 | 0.7573 | 0.5 | | No log | 4.65 | 2000 | 0.0457 | 0.8342 | 0.83 | 0.8321 | 0.7000 | | No log | 4.65 | 2000 | 0.0729 | 0.8232 | 0.815 | 0.8191 | 0.5 | | No log | 4.65 | 2000 | 0.0458 | 0.7522 | 0.865 | 0.8047 | 0.4 | | No log | 4.65 | 2000 | 0.1063 | 0.77 | 0.77 | 0.7700 | 0.5 | | No log | 4.65 | 2000 | 0.0438 | 0.8738 | 0.9 | 0.8867 | 0.5 | | No log | 4.65 | 2000 | 0.0762 | 0.7074 | 0.665 | 0.6856 | 0.3000 | | No log | 4.65 | 2000 | 0.1196 | 0.7602 | 0.745 | 0.7525 | 0.4 | | No log | 4.65 | 2000 | 0.0269 | 0.8507 | 0.855 | 0.8529 | 0.3000 | | No log | 4.65 | 2000 | 0.0803 | 0.7543 | 0.875 | 0.8102 | 0.4 | | No log | 4.65 | 2000 | 0.0961 | 0.7933 | 0.825 | 0.8088 | 0.5 | | No log | 4.65 | 2000 | 0.0800 | 0.8162 | 0.755 | 0.7844 | 0.7000 | | No log | 4.65 | 2000 | 0.0321 | 0.7803 | 0.87 | 0.8227 | 0.3000 | | No log | 4.65 | 2000 | 0.0205 | 0.7886 | 0.97 | 0.8700 | 0.0880 | | No log | 4.65 | 2000 | 0.1133 | 0.7385 | 0.805 | 0.7703 | 0.5 | | No log | 4.65 | 2000 | 0.0640 | 0.6541 | 0.605 | 0.6286 | 0.4 | | No log | 4.65 | 2000 | 0.1780 | 0.8301 | 0.855 | 0.8424 | 0.032 | | No log | 4.65 | 2000 | 0.1339 | 0.4362 | 0.53 | 0.4786 | 0.068 | | No log | 4.65 | 2000 | 0.1562 | 0.4286 | 0.6970 | 0.5308 | 0.002 | | No log | 4.65 | 2000 | 0.0775 | 0.8564 | 0.865 | 0.8607 | 0.3000 | | No log | 4.65 | 2000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 | | No log | 4.65 | 2000 | 0.0100 | 0.8165 | 0.9036 | 0.8578 | 0.4 | | No log | 4.65 | 2000 | 0.0039 | 0.9655 | 0.98 | 0.9727 | 0.4 | | No log | 4.65 | 2000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.041 | | 
No log | 4.65 | 2000 | 0.0018 | 1.0 | 1.0 | 1.0 | 0.1 | | No log | 4.65 | 2000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.3000 | | No log | 4.65 | 2000 | 0.0028 | 0.9947 | 1.0 | 0.9973 | 0.015 | | No log | 4.65 | 2000 | 0.0024 | 0.9950 | 0.99 | 0.9925 | 0.8 | | No log | 4.65 | 2000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 | | No log | 4.65 | 2000 | 0.0012 | 0.9950 | 0.99 | 0.9925 | 0.7000 | | No log | 4.65 | 2000 | 0.0014 | 0.9901 | 1.0 | 0.9950 | 0.4 | | No log | 4.65 | 2000 | 0.0074 | 0.9899 | 0.98 | 0.9849 | 0.0220 | | No log | 4.65 | 2000 | 0.0187 | 0.9946 | 0.925 | 0.9585 | 0.4 | | No log | 4.65 | 2000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 | | No log | 4.65 | 2000 | 0.0132 | 0.9497 | 0.945 | 0.9474 | 0.097 | | No log | 4.65 | 2000 | 0.0006 | 1.0 | 1.0 | 1.0 | 0.5 | | No log | 4.65 | 2000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 | | No log | 4.65 | 2000 | 0.0026 | 0.99 | 0.99 | 0.99 | 0.2 | | No log | 4.65 | 2000 | 0.0028 | 0.9949 | 0.98 | 0.9874 | 0.9 | | No log | 4.65 | 2000 | 0.0018 | 0.9851 | 0.99 | 0.9875 | 0.3000 | | No log | 4.65 | 2000 | 0.0384 | 0.9261 | 0.815 | 0.8670 | 0.7000 | | No log | 4.65 | 2000 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.0260 | | No log | 4.65 | 2000 | 0.0022 | 0.985 | 0.985 | 0.985 | 0.3000 | | No log | 4.65 | 2000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.0730 | | No log | 4.65 | 2000 | 0.0009 | 1.0 | 0.995 | 0.9975 | 0.8 | | No log | 4.65 | 2000 | 0.0017 | 0.9901 | 1.0 | 0.9950 | 0.023 | | No log | 4.65 | 2000 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.0860 | | No log | 4.65 | 2000 | 0.0047 | 0.9559 | 0.975 | 0.9653 | 0.5 | | No log | 4.65 | 2000 | 0.0013 | 0.9901 | 1.0 | 0.9950 | 0.2 | | No log | 4.65 | 2000 | 0.0185 | 0.9948 | 0.95 | 0.9719 | 0.2 | | No log | 4.65 | 2000 | 0.0008 | 1.0 | 1.0 | 1.0 | 0.6 | | No log | 4.65 | 2000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.039 | | No log | 4.65 | 2000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.002 | | No log | 4.65 | 2000 | 0.0012 | 1.0 | 1.0 | 1.0 | 0.7000 | | No log | 4.65 | 2000 | 0.0047 | 0.9615 | 1.0 | 0.9804 | 0.011 | | No log | 4.65 | 2000 | 0.0008 | 1.0 | 0.995 | 0.9975 | 0.7000 | | No log | 4.65 | 2000 | 0.0053 | 0.9792 | 1.0 | 0.9895 | 0.4 | | No log | 4.65 | 2000 | 0.0055 | 0.9041 | 0.99 | 0.9451 | 0.083 | | No log | 4.65 | 2000 | 0.0106 | 0.9606 | 0.975 | 0.9677 | 0.9 | | No log | 4.65 | 2000 | 0.0019 | 0.995 | 0.995 | 0.995 | 0.6 | | No log | 4.65 | 2000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 | | No log | 4.65 | 2000 | 0.0024 | 1.0 | 0.985 | 0.9924 | 0.9 | | No log | 4.65 | 2000 | 0.0015 | 0.9949 | 0.985 | 0.9899 | 0.5 | | No log | 4.65 | 2000 | 0.0174 | 0.9320 | 0.96 | 0.9458 | 0.8 | | No log | 4.65 | 2000 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.7000 | | No log | 4.65 | 2000 | 0.0028 | 0.99 | 0.99 | 0.99 | 0.7000 | | No log | 4.65 | 2000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.002 | | No log | 4.65 | 2000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.0370 | | No log | 4.65 | 2000 | 0.0108 | 0.9561 | 0.98 | 0.9679 | 0.2 | | No log | 4.65 | 2000 | 0.0018 | 0.9917 | 1.0 | 0.9959 | 0.003 | | No log | 4.65 | 2000 | 0.0223 | 0.8780 | 0.9 | 0.8889 | 0.2 | | No log | 4.65 | 2000 | 0.0104 | 0.9950 | 1.0 | 0.9975 | 0.003 | | No log | 4.65 | 2000 | 0.0009 | 0.9950 | 1.0 | 0.9975 | 0.8 | | No log | 4.65 | 2000 | 0.0010 | 0.9901 | 1.0 | 0.9950 | 0.0220 | | No log | 4.65 | 2000 | 0.0010 | 1.0 | 0.995 | 0.9975 | 0.5 | | No log | 4.65 | 2000 | 0.0176 | 0.7375 | 0.955 | 0.8322 | 0.0090 | | No log | 4.65 | 2000 | 0.0196 | 0.9798 | 0.97 | 0.9749 | 0.008 | | No log | 4.65 | 2000 | 0.0076 | 0.9845 | 0.95 | 0.9669 | 0.6 | | No log | 4.65 | 2000 | 0.0184 | 0.7574 | 0.7766 | 0.7669 | 0.7000 | | No log | 4.65 | 2000 | 0.0132 | 
0.8294 | 0.875 | 0.8516 | 0.4 | | No log | 4.65 | 2000 | 0.0616 | 0.7553 | 0.895 | 0.8192 | 0.3000 | | No log | 4.65 | 2000 | 0.1522 | 0.7 | 0.625 | 0.6604 | 0.2 | | No log | 4.65 | 2000 | 0.0091 | 0.9282 | 0.905 | 0.9165 | 0.6 | | No log | 4.65 | 2000 | 0.1114 | 0.7237 | 0.8777 | 0.7933 | 0.2 | | No log | 4.65 | 2000 | 0.0412 | 0.8601 | 0.83 | 0.8448 | 0.5 | | No log | 4.65 | 2000 | 0.0408 | 0.7725 | 0.815 | 0.7932 | 0.3000 | | No log | 4.65 | 2000 | 0.0390 | 0.8529 | 0.87 | 0.8614 | 0.3000 | | No log | 4.65 | 2000 | 0.0388 | 0.8476 | 0.89 | 0.8683 | 0.4 | | No log | 4.65 | 2000 | 0.0322 | 0.8641 | 0.795 | 0.8281 | 0.4 | | No log | 4.65 | 2000 | 0.0494 | 0.7714 | 0.81 | 0.7902 | 0.3000 | | No log | 4.65 | 2000 | 0.0309 | 0.7290 | 0.78 | 0.7536 | 0.4 | | No log | 4.65 | 2000 | 0.0576 | 0.7487 | 0.7 | 0.7235 | 0.5 | | No log | 4.65 | 2000 | 0.0630 | 0.7580 | 0.83 | 0.7924 | 0.3000 | | No log | 4.65 | 2000 | 0.0065 | 0.9848 | 0.975 | 0.9799 | 0.2 | | No log | 4.65 | 2000 | 0.0527 | 0.8282 | 0.675 | 0.7438 | 0.7000 | | No log | 4.65 | 2000 | 0.0339 | 0.7959 | 0.78 | 0.7879 | 0.4 | | No log | 4.65 | 2000 | 0.0379 | 0.75 | 0.7727 | 0.7612 | 0.4 | | No log | 4.65 | 2000 | 0.0648 | 0.7059 | 0.7839 | 0.7429 | 0.4 | | No log | 4.65 | 2000 | 0.0328 | 0.7947 | 0.755 | 0.7744 | 0.4 | | No log | 4.65 | 2000 | 0.0258 | 0.8190 | 0.86 | 0.8390 | 0.3000 | | No log | 4.65 | 2000 | 0.0092 | 0.945 | 0.945 | 0.945 | 0.4 | | No log | 4.65 | 2000 | 0.0392 | 0.6653 | 0.835 | 0.7406 | 0.3000 | | No log | 4.65 | 2000 | 0.0403 | 0.7231 | 0.875 | 0.7919 | 0.2 | | No log | 4.65 | 2000 | 0.0419 | 0.8757 | 0.81 | 0.8416 | 0.6 | | No log | 4.65 | 2000 | 0.0333 | 0.7240 | 0.695 | 0.7092 | 0.5 | | No log | 4.65 | 2000 | 0.0404 | 0.7352 | 0.805 | 0.7685 | 0.4 | | No log | 4.65 | 2000 | 0.1122 | 0.7865 | 0.755 | 0.7704 | 0.016 | | No log | 4.65 | 2000 | 0.0879 | 0.6701 | 0.65 | 0.6599 | 0.3000 | | No log | 4.65 | 2000 | 0.0249 | 0.9793 | 0.945 | 0.9618 | 0.2 | | No log | 4.65 | 2000 | 0.0011 | 1.0 | 1.0 | 1.0 | 0.032 | | No log | 4.65 | 2000 | 0.0182 | 0.9369 | 0.965 | 0.9507 | 0.2 | | No log | 4.65 | 2000 | 0.0542 | 0.7277 | 0.735 | 0.7313 | 0.4 | | No log | 4.65 | 2000 | 0.0578 | 0.6496 | 0.76 | 0.7005 | 0.4 | | No log | 4.65 | 2000 | 0.1205 | 0.6182 | 0.7234 | 0.6667 | 0.3000 | | No log | 4.65 | 2000 | 0.0380 | 0.6723 | 0.595 | 0.6313 | 0.4 | | No log | 4.65 | 2000 | 0.0285 | 0.8743 | 0.835 | 0.8542 | 0.5 | | No log | 4.65 | 2000 | 0.0363 | 0.8341 | 0.905 | 0.8681 | 0.4 | | No log | 4.65 | 2000 | 0.0104 | 0.9602 | 0.965 | 0.9626 | 0.5 | | No log | 4.65 | 2000 | 0.0410 | 0.8342 | 0.83 | 0.8321 | 0.4 | | No log | 4.65 | 2000 | 0.0331 | 0.6714 | 0.705 | 0.6878 | 0.5 | | No log | 4.65 | 2000 | 0.0826 | 0.5984 | 0.73 | 0.6577 | 0.3000 | | No log | 4.65 | 2000 | 0.0301 | 0.9016 | 0.87 | 0.8855 | 0.5 | | No log | 4.65 | 2000 | 0.0273 | 0.7812 | 0.875 | 0.8255 | 0.4 | | No log | 4.65 | 2000 | 0.0122 | 0.9077 | 0.9833 | 0.944 | 0.2 | | No log | 4.65 | 2000 | 0.0179 | 0.9021 | 0.875 | 0.8883 | 0.6 | | No log | 4.65 | 2000 | 0.0327 | 0.9402 | 0.865 | 0.9010 | 0.7000 | | No log | 4.65 | 2000 | 0.0102 | 0.8346 | 0.925 | 0.8775 | 0.2 | | No log | 4.65 | 2000 | 0.0252 | 0.8578 | 0.905 | 0.8808 | 0.2 | | No log | 4.65 | 2000 | 0.0638 | 0.8182 | 0.9 | 0.8571 | 0.2 | | No log | 4.65 | 2000 | 0.0587 | 0.6203 | 0.735 | 0.6728 | 0.3000 | | No log | 4.65 | 2000 | 0.0038 | 0.9802 | 0.99 | 0.9851 | 0.4 | | No log | 4.65 | 2000 | 0.0945 | 0.6036 | 0.67 | 0.6351 | 0.4 | | No log | 4.65 | 2000 | 0.0483 | 0.6842 | 0.325 | 0.4407 | 0.5 | | No log | 
4.65 | 2000 | 0.0694 | 0.7009 | 0.82 | 0.7558 | 0.016 | | No log | 4.65 | 2000 | 0.1292 | 0.6245 | 0.815 | 0.7072 | 0.2 | | No log | 4.65 | 2000 | 0.1005 | 0.5288 | 0.78 | 0.6303 | 0.0860 | | No log | 4.65 | 2000 | 0.0837 | 0.5606 | 0.74 | 0.6379 | 0.097 | | No log | 4.65 | 2000 | 0.1527 | 0.7364 | 0.81 | 0.7714 | 0.034 | | No log | 4.65 | 2000 | 0.0721 | 0.6314 | 0.805 | 0.7077 | 0.3000 | | No log | 4.65 | 2000 | 0.0558 | 0.7155 | 0.83 | 0.7685 | 0.3000 | | No log | 4.65 | 2000 | 0.0509 | 0.4717 | 0.5025 | 0.4866 | 0.3000 | | No log | 4.65 | 2000 | 0.0629 | 0.6546 | 0.815 | 0.7261 | 0.3000 | | No log | 4.65 | 2000 | 0.0842 | 0.5256 | 0.7455 | 0.6165 | 0.2 | | No log | 4.65 | 2000 | 0.0650 | 0.7177 | 0.89 | 0.7946 | 0.095 | | No log | 4.65 | 2000 | 0.0650 | 0.7177 | 0.89 | 0.7946 | 0.095 | | No log | 4.65 | 2000 | 0.0552 | 0.7301 | 0.825 | 0.7746 | 0.2 | | No log | 4.65 | 2000 | 0.0753 | 0.7178 | 0.865 | 0.7846 | 0.09 | | No log | 4.65 | 2000 | 0.0522 | 0.7 | 0.77 | 0.7333 | 0.2 | | No log | 4.65 | 2000 | 0.0542 | 0.7525 | 0.76 | 0.7562 | 0.4 | | No log | 4.65 | 2000 | 0.1020 | 0.6245 | 0.715 | 0.6667 | 0.2 | | No log | 4.65 | 2000 | 0.0628 | 0.6107 | 0.855 | 0.7125 | 0.2 | | No log | 4.65 | 2000 | 0.0542 | 0.8317 | 0.84 | 0.8358 | 0.4 | | No log | 4.65 | 2000 | 0.1749 | 0.5290 | 0.685 | 0.5969 | 0.097 | | No log | 4.65 | 2000 | 0.0773 | 0.6587 | 0.83 | 0.7345 | 0.098 | | No log | 4.65 | 2000 | 0.1323 | 0.6826 | 0.785 | 0.7302 | 0.081 | | No log | 4.65 | 2000 | 0.0655 | 0.6797 | 0.785 | 0.7285 | 0.1 | | No log | 4.65 | 2000 | 0.0655 | 0.6797 | 0.785 | 0.7285 | 0.1 | | No log | 4.65 | 2000 | 0.0389 | 0.5172 | 0.6522 | 0.5769 | 0.3000 | | No log | 4.65 | 2000 | 0.0389 | 0.5172 | 0.6522 | 0.5769 | 0.3000 | | No log | 4.65 | 2000 | 0.0583 | 0.7486 | 0.67 | 0.7071 | 0.4 | | No log | 4.65 | 2000 | 0.0509 | 0.5674 | 0.4 | 0.4692 | 0.6 | | No log | 4.65 | 2000 | 0.0783 | 0.3429 | 0.9231 | 0.5000 | 0.011 | | No log | 4.65 | 2000 | 0.0520 | 0.5936 | 0.745 | 0.6608 | 0.4 | | No log | 4.65 | 2000 | 0.0456 | 0.5152 | 0.765 | 0.6157 | 0.2 | | No log | 4.65 | 2000 | 0.0643 | 0.7540 | 0.705 | 0.7287 | 0.4 | | No log | 4.65 | 2000 | 0.0547 | 0.7240 | 0.695 | 0.7092 | 0.7000 | | No log | 4.65 | 2000 | 0.0756 | 0.6753 | 0.78 | 0.7239 | 0.3000 | | No log | 4.65 | 2000 | 0.1269 | 0.2325 | 0.5096 | 0.3193 | 0.065 | | No log | 4.65 | 2000 | 0.1097 | 0.6613 | 0.82 | 0.7321 | 0.094 | | No log | 4.65 | 2000 | 0.0697 | 0.6887 | 0.73 | 0.7087 | 0.5 | | No log | 4.65 | 2000 | 0.3091 | 0.4182 | 0.575 | 0.4842 | 0.001 | | No log | 4.65 | 2000 | 0.1204 | 0.7168 | 0.81 | 0.7606 | 0.7000 | | No log | 4.65 | 2000 | 0.0879 | 0.5923 | 0.77 | 0.6696 | 0.2 | | No log | 4.65 | 2000 | 0.1703 | 0.7736 | 0.82 | 0.7961 | 0.064 | | No log | 4.65 | 2000 | 0.1559 | 1.0 | 0.18 | 0.3051 | 0.9 | | No log | 4.65 | 2000 | 0.0868 | 0.4070 | 0.58 | 0.4784 | 0.07 | | No log | 4.65 | 2000 | 0.0716 | 0.7942 | 0.965 | 0.8713 | 0.8 | | No log | 4.65 | 2000 | 0.0608 | 0.4490 | 0.55 | 0.4944 | 0.034 | | No log | 4.65 | 2000 | 0.0774 | 0.75 | 0.765 | 0.7574 | 0.3000 | | No log | 4.65 | 2000 | 0.1107 | 0.4737 | 0.75 | 0.5806 | 0.2 | | No log | 4.65 | 2000 | 0.0693 | 0.66 | 0.825 | 0.7333 | 0.2 | | No log | 4.65 | 2000 | 0.0587 | 0.8098 | 0.745 | 0.7760 | 0.5 | | No log | 4.65 | 2000 | 0.0948 | 0.5385 | 0.63 | 0.5806 | 0.098 | | No log | 4.65 | 2000 | 0.0644 | 0.7299 | 0.77 | 0.7494 | 0.3000 | | No log | 4.65 | 2000 | 0.0563 | 0.5297 | 0.535 | 0.5323 | 0.5 | | No log | 4.65 | 2000 | 0.0430 | 0.7695 | 0.935 | 0.8442 | 0.064 | | No log | 4.65 | 2000 | 
0.1220 | 0.3489 | 0.56 | 0.4299 | 0.4 | | No log | 4.65 | 2000 | 0.0578 | 0.7633 | 0.7940 | 0.7783 | 0.4 | | No log | 4.65 | 2000 | 0.1000 | 0.395 | 0.7670 | 0.5215 | 0.046 | | No log | 4.65 | 2000 | 0.1451 | 0.8511 | 0.2 | 0.3239 | 0.9 | | No log | 4.65 | 2000 | 0.0596 | 0.7165 | 0.695 | 0.7056 | 0.3000 | | No log | 4.65 | 2000 | 0.0811 | 0.7793 | 0.83 | 0.8039 | 0.2 | | No log | 4.65 | 2000 | 0.0811 | 0.7793 | 0.83 | 0.8039 | 0.2 | | No log | 4.65 | 2000 | 0.0602 | 0.5167 | 0.695 | 0.5928 | 0.0860 | | No log | 4.65 | 2000 | 0.0840 | 0.6830 | 0.9141 | 0.7819 | 0.099 | | No log | 4.65 | 2000 | 0.0581 | 0.6293 | 0.7337 | 0.6775 | 0.4 | | No log | 4.65 | 2000 | 0.0613 | 0.6783 | 0.875 | 0.7642 | 0.3000 | | No log | 4.65 | 2000 | 0.0640 | 0.7665 | 0.755 | 0.7607 | 0.3000 | | No log | 4.65 | 2000 | 0.0495 | 0.8225 | 0.695 | 0.7534 | 0.5 | | No log | 4.65 | 2000 | 0.0587 | 0.5502 | 0.685 | 0.6102 | 0.4 | | No log | 4.65 | 2000 | 0.0670 | 0.6885 | 0.84 | 0.7568 | 0.3000 | | No log | 4.65 | 2000 | 0.0567 | 0.7566 | 0.855 | 0.8028 | 0.3000 | | No log | 4.65 | 2000 | 0.0533 | 0.7684 | 0.73 | 0.7487 | 0.4 | | No log | 4.65 | 2000 | 0.0460 | 0.8276 | 0.72 | 0.7701 | 0.5 | | No log | 4.65 | 2000 | 0.0947 | 0.6403 | 0.81 | 0.7152 | 0.075 | | No log | 4.65 | 2000 | 0.0541 | 0.5610 | 0.46 | 0.5055 | 0.4 | | No log | 4.65 | 2000 | 0.0633 | 0.4784 | 0.665 | 0.5565 | 0.3000 | | No log | 4.65 | 2000 | 0.0789 | 0.6322 | 0.765 | 0.6923 | 0.099 | | No log | 4.65 | 2000 | 0.0683 | 0.4971 | 0.86 | 0.6300 | 0.0720 | | No log | 4.65 | 2000 | 0.0672 | 0.7846 | 0.765 | 0.7747 | 0.6 | | No log | 4.65 | 2000 | 0.0507 | 0.7573 | 0.78 | 0.7685 | 0.4 | | No log | 4.65 | 2000 | 0.1035 | 0.5233 | 0.675 | 0.5895 | 0.3000 | | No log | 4.65 | 2000 | 0.0867 | 0.6811 | 0.865 | 0.7621 | 0.069 | | No log | 4.65 | 2000 | 0.0560 | 0.5714 | 0.82 | 0.6735 | 0.3000 | | No log | 4.65 | 2000 | 0.0560 | 0.5714 | 0.82 | 0.6735 | 0.3000 | | No log | 4.65 | 2000 | 0.0560 | 0.5714 | 0.82 | 0.6735 | 0.3000 | | No log | 4.65 | 2000 | 0.0560 | 0.5714 | 0.82 | 0.6735 | 0.3000 | | No log | 4.65 | 2000 | 0.2802 | 0.2650 | 0.4874 | 0.3434 | 0.001 | | No log | 4.65 | 2000 | 0.0975 | 0.6107 | 0.7487 | 0.6727 | 0.068 | | No log | 4.65 | 2000 | 0.0218 | 0.9353 | 0.94 | 0.9377 | 0.3000 | | No log | 4.65 | 2000 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.3000 | | No log | 4.65 | 2000 | 0.0034 | 1.0 | 0.99 | 0.9950 | 0.2 | | No log | 4.65 | 2000 | 0.0004 | 0.9950 | 1.0 | 0.9975 | 0.2 | | No log | 4.65 | 2000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.081 | | No log | 4.65 | 2000 | 0.0010 | 0.9950 | 1.0 | 0.9975 | 0.4 | | No log | 4.65 | 2000 | 0.0021 | 1.0 | 0.995 | 0.9975 | 0.7000 | | No log | 4.65 | 2000 | 0.0038 | 0.99 | 0.99 | 0.99 | 0.8 | | No log | 4.65 | 2000 | 0.0040 | 0.9899 | 0.985 | 0.9875 | 0.2 | | No log | 4.65 | 2000 | 0.0059 | 0.9851 | 0.995 | 0.9900 | 0.008 | | No log | 4.65 | 2000 | 0.0267 | 0.9303 | 0.935 | 0.9327 | 0.029 | | No log | 4.65 | 2000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 | | No log | 4.65 | 2000 | 0.0568 | 0.7978 | 0.71 | 0.7513 | 0.2 | | No log | 4.65 | 2000 | 0.0016 | 0.9950 | 1.0 | 0.9975 | 0.8 | | No log | 4.65 | 2000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.3000 | | No log | 4.65 | 2000 | 0.0039 | 1.0 | 0.985 | 0.9924 | 0.5 | | No log | 4.65 | 2000 | 0.0080 | 0.9602 | 0.965 | 0.9626 | 0.8 | | No log | 4.65 | 2000 | 0.0010 | 0.995 | 0.995 | 0.995 | 0.3000 | | No log | 4.65 | 2000 | 0.0006 | 1.0 | 0.995 | 0.9975 | 0.7000 | | No log | 4.65 | 2000 | 0.0028 | 0.995 | 0.995 | 0.995 | 0.2 | | No log | 4.65 | 2000 | 0.0015 | 0.995 | 0.995 | 0.995 | 0.4 | | No 
log | 4.65 | 2000 | 0.0303 | 0.9242 | 0.915 | 0.9196 | 0.0090 | | No log | 4.65 | 2000 | 0.1471 | 0.3239 | 0.63 | 0.4278 | 0.6 | | No log | 4.65 | 2000 | 0.1173 | 0.4535 | 0.2690 | 0.3377 | 0.3000 | | No log | 4.65 | 2000 | 0.1420 | 0.6164 | 0.675 | 0.6444 | 0.3000 | | No log | 4.65 | 2000 | 0.1245 | 0.4897 | 0.835 | 0.6174 | 0.2 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
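Each row of the long training-results table above reports precision, recall, and F1 at a row-specific probability threshold rather than at a fixed 0.5 cutoff; the final row (precision 0.4897, recall 0.835, F1 0.6174 at threshold 0.2) matches the headline evaluation numbers. As a minimal sketch of how such thresholded token-classification metrics can be computed — the probability and label arrays below are invented for illustration, and the "best-F1 operating point" reading of the Threshold column is an inference, not taken from the model's actual evaluation code:

```python
import numpy as np

# Invented per-token boundary probabilities and gold labels (illustration only).
probs = np.array([0.05, 0.91, 0.12, 0.33, 0.88, 0.07, 0.64, 0.22])
gold  = np.array([0,    1,    0,    1,    1,    0,    1,    0   ])

def prf_at(threshold: float):
    """Binarize at `threshold`, then compute precision/recall/F1 by hand."""
    pred = probs >= threshold
    tp = np.sum(pred & (gold == 1))
    fp = np.sum(pred & (gold == 0))
    fn = np.sum(~pred & (gold == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Sweep thresholds and keep the best-F1 operating point, which is what the
# per-row "Threshold" column in the table above appears to report.
best_t = max(np.arange(0.1, 1.0, 0.1), key=lambda t: prf_at(t)[2])
p, r, f1 = prf_at(best_t)
print(f"threshold={best_t:.1f}  precision={p:.4f}  recall={r:.4f}  f1={f1:.4f}")
```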
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "v2-WtP-FT-6L-256BS-UD-cUD-Opus-cOpus", "results": []}]}
igorsterner/v2-WtP-FT-6L-256BS-UD-cUD-Opus-cOpus
null
[ "transformers", "safetensors", "xlm-token", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:42:22+00:00
[]
[]
TAGS #transformers #safetensors #xlm-token #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
v2-WtP-FT-6L-256BS-UD-cUD-Opus-cOpus
====================================

This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:

* Loss: 0.1245
* Precision: 0.4897
* Recall: 0.835
* F1: 0.6174
* Threshold: 0.2

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0001
* train\_batch\_size: 512
* eval\_batch\_size: 512
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5

(A configuration sketch reproducing these hyperparameters follows this card.)

### Training results

### Framework versions

* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
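The hyperparameter list above maps one-to-one onto fields of `transformers.TrainingArguments`. Below is a minimal sketch of that configuration — the output directory is a placeholder, and it treats the reported batch size of 512 as the per-device value, since the card does not distinguish per-device from total:

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameter list above.
# Assumptions: "output_dir" is a placeholder, and 512 is taken to be the
# per-device batch size (the card does not say which it is).
args = TrainingArguments(
    output_dir="v2-WtP-FT-6L-256BS-UD-cUD-Opus-cOpus",
    learning_rate=1e-4,
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    seed=42,
    adam_beta1=0.9,           # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,        # and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```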
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 512\n* eval\\_batch\\_size: 512\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-token #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 512\n* eval\\_batch\\_size: 512\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model. [More Information Needed]
(A hedged loading sketch based on this repo's metadata follows the card.)

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.10.0
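The quick-start section above is left as a placeholder. Going only by the repo metadata — base model `meta-llama/Meta-Llama-3-8B`, adapter id `AlienKevin/Meta-Llama-3-8B-qlora-lang`, PEFT 0.10.0 — a minimal loading sketch might look like the following. The 4-bit quantization config is an assumption inferred from "qlora" in the adapter name, causal language modeling is an assumed task type, and the base model is gated, so access must be granted first:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"                 # from the card metadata
adapter_id = "AlienKevin/Meta-Llama-3-8B-qlora-lang"   # this repo

# 4-bit loading is an assumption inferred from "qlora" in the adapter name.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello, world!", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```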
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"}
AlienKevin/Meta-Llama-3-8B-qlora-lang
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-04-23T14:44:02+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us
# Model Card for Model ID

## Model Details

### Model Description

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact

### Framework versions

- PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
null
# sbs2680/Emollama-chat-13b-Q6_K-GGUF This model was converted to GGUF format from [`lzw1008/Emollama-chat-13b`](https://huggingface.co/lzw1008/Emollama-chat-13b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/lzw1008/Emollama-chat-13b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo sbs2680/Emollama-chat-13b-Q6_K-GGUF --model emollama-chat-13b.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo sbs2680/Emollama-chat-13b-Q6_K-GGUF --model emollama-chat-13b.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m emollama-chat-13b.Q6_K.gguf -n 128 ```
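Beyond the CLI flows above, the same file can be used from Python. A minimal sketch with the `llama-cpp-python` bindings (an addition, not part of the original card) is below; it assumes the GGUF file has already been downloaded to the working directory.

```python
# Minimal sketch: running the converted GGUF through llama-cpp-python.
# Assumes emollama-chat-13b.Q6_K.gguf is present locally.
from llama_cpp import Llama

llm = Llama(model_path="emollama-chat-13b.Q6_K.gguf", n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```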
{"language": ["en"], "license": "mit", "tags": ["llama-cpp", "gguf-my-repo"]}
sbs2680/Emollama-chat-13b-Q6_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "license:mit", "region:us" ]
null
2024-04-23T14:44:26+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #license-mit #region-us
# sbs2680/Emollama-chat-13b-Q6_K-GGUF This model was converted to GGUF format from 'lzw1008/Emollama-chat-13b' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# sbs2680/Emollama-chat-13b-Q6_K-GGUF\nThis model was converted to GGUF format from 'lzw1008/Emollama-chat-13b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-mit #region-us \n", "# sbs2680/Emollama-chat-13b-Q6_K-GGUF\nThis model was converted to GGUF format from 'lzw1008/Emollama-chat-13b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper tiny mozilla-foundation/common_voice_11_0 - Huang Jordan This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4430 - Cer: 22.6389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.4622 | 0.7092 | 500 | 0.4827 | 24.3222 | | 0.3287 | 1.4184 | 1000 | 0.4569 | 22.5015 | | 0.2613 | 2.1277 | 1500 | 0.4454 | 22.3270 | | 0.24 | 2.8369 | 2000 | 0.4430 | 22.6389 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
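As a usage note not present in the auto-generated card: a minimal transcription sketch with the transformers ASR pipeline, where `sample.wav` is a placeholder audio file.

```python
# Minimal sketch: transcribing Chinese speech with the fine-tuned checkpoint.
# "sample.wav" is a placeholder; the pipeline resamples input audio as needed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="HuangJordan/whisper-tiny-chinese-cer",
)
print(asr("sample.wav")["text"])
```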
{"language": ["zh"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper tiny mozilla-foundation/common_voice_11_0 - Huang Jordan", "results": []}]}
HuangJordan/whisper-tiny-chinese-cer
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "zh", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:45:07+00:00
[]
[ "zh" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us
Whisper tiny mozilla-foundation/common\_voice\_11\_0 - Huang Jordan =================================================================== This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: * Loss: 0.4430 * Cer: 22.6389 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 200 * training\_steps: 2000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.2+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* training\\_steps: 2000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* training\\_steps: 2000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
adapter-transformers
# Adapter `Pubudu/mbart-large-cc25_prefix_tuning_12_par_bn_rf_2_army_first3` for facebook/mbart-large-cc25 An [adapter](https://adapterhub.ml) for the `facebook/mbart-large-cc25` model that was trained on the [summarization/army_5100_first3](https://adapterhub.ml/explore/summarization/army_5100_first3/) dataset. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("facebook/mbart-large-cc25") adapter_name = model.load_adapter("Pubudu/mbart-large-cc25_prefix_tuning_12_par_bn_rf_2_army_first3", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
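Continuing the loading snippet above, a short generation sketch (not part of the original card); it assumes the prefix-tuning adapter was saved together with its seq2seq head (which `load_adapter` restores when present), and the input text and beam settings are illustrative.

```python
# Minimal sketch: summarizing with the activated adapter, continuing from the
# `model` loaded above. Assumes the adapter checkpoint includes a seq2seq head.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")

text = "Article text to summarize ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```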
{"tags": ["adapterhub:summarization/army_5100_first3", "adapter-transformers", "mbart"], "datasets": ["army_5100_first3"]}
Pubudu/mbart-large-cc25_prefix_tuning_12_par_bn_rf_2_army_first3
null
[ "adapter-transformers", "adapterhub:summarization/army_5100_first3", "mbart", "dataset:army_5100_first3", "region:us" ]
null
2024-04-23T14:47:07+00:00
[]
[]
TAGS #adapter-transformers #adapterhub-summarization/army_5100_first3 #mbart #dataset-army_5100_first3 #region-us
# Adapter 'Pubudu/mbart-large-cc25_prefix_tuning_12_par_bn_rf_2_army_first3' for facebook/mbart-large-cc25 An adapter for the 'facebook/mbart-large-cc25' model that was trained on the summarization/army_5100_first3 dataset. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'Pubudu/mbart-large-cc25_prefix_tuning_12_par_bn_rf_2_army_first3' for facebook/mbart-large-cc25\n\nAn adapter for the 'facebook/mbart-large-cc25' model that was trained on the summarization/army_5100_first3 dataset.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #adapterhub-summarization/army_5100_first3 #mbart #dataset-army_5100_first3 #region-us \n", "# Adapter 'Pubudu/mbart-large-cc25_prefix_tuning_12_par_bn_rf_2_army_first3' for facebook/mbart-large-cc25\n\nAn adapter for the 'facebook/mbart-large-cc25' model that was trained on the summarization/army_5100_first3 dataset.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
text-to-speech
transformers
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Parler-TTS Mini v0.1 <a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts_mini"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> **Parler-TTS Mini v0.1** is a lightweight text-to-speech (TTS) model, trained on 10.5K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation). It is the first release model from the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code. ## Usage Using Parler-TTS is as simple as "bonjour". Simply install the library once: ```sh pip install git+https://github.com/huggingface/parler-tts.git ``` You can then use the model with the following inference snippet: ```py import torch from parler_tts import ParlerTTSForConditionalGeneration from transformers import AutoTokenizer import soundfile as sf device = "cuda:0" if torch.cuda.is_available() else "cpu" model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler_tts_mini_v0.1").to(device) tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler_tts_mini_v0.1") prompt = "Hey, how are you doing today?" description = "A female speaker with a slightly low-pitched voice delivers her words quite expressively, in a very confined sounding environment with clear audio quality. She speaks very fast." input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device) prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids) audio_arr = generation.cpu().numpy().squeeze() sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate) ``` **Tips**: * Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise * Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech * The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt ## Motivation Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively. Contrarily to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under permissive license, enabling the community to build on our work and develop their own powerful TTS models. Parler-TTS was released alongside: * [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tuned your own version of the model. * [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets. * [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints. 
## Citation If you found this repository useful, please consider citing this work and also the original Stability AI paper: ``` @misc{lacombe-etal-2024-parler-tts, author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi}, title = {Parler-TTS}, year = {2024}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/huggingface/parler-tts}} } ``` ``` @misc{lyth2024natural, title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations}, author={Dan Lyth and Simon King}, year={2024}, eprint={2402.01912}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` ## License This model is permissively licensed under the Apache 2.0 license.
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-to-speech", "annotation"], "datasets": ["parler-tts/mls_eng_10k", "blabble-io/libritts_r", "parler-tts/libritts_r_tags_tagged_10k_generated", "parler-tts/mls-eng-10k-tags_tagged_10k_generated"], "pipeline_tag": "text-to-speech", "inference": false}
ipsilondev/parler_tts
null
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "text-to-speech", "annotation", "en", "dataset:parler-tts/mls_eng_10k", "dataset:blabble-io/libritts_r", "dataset:parler-tts/libritts_r_tags_tagged_10k_generated", "dataset:parler-tts/mls-eng-10k-tags_tagged_10k_generated", "arxiv:2402.01912", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2024-04-23T14:47:11+00:00
[ "2402.01912" ]
[ "en" ]
TAGS #transformers #safetensors #parler_tts #text2text-generation #text-to-speech #annotation #en #dataset-parler-tts/mls_eng_10k #dataset-blabble-io/libritts_r #dataset-parler-tts/libritts_r_tags_tagged_10k_generated #dataset-parler-tts/mls-eng-10k-tags_tagged_10k_generated #arxiv-2402.01912 #license-apache-2.0 #autotrain_compatible #region-us
<img src="URL alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Parler-TTS Mini v0.1 <a target="_blank" href="URL <img src="URL alt="Open in HuggingFace"/> </a> Parler-TTS Mini v0.1 is a lightweight text-to-speech (TTS) model, trained on 10.5K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation). It is the first release model from the Parler-TTS project, which aims to provide the community with TTS training resources and dataset pre-processing code. ## Usage Using Parler-TTS is as simple as "bonjour". Simply install the library once: You can then use the model with the following inference snippet: Tips: * Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise * Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech * The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt ## Motivation Parler-TTS is a reproduction of work from the paper Natural language guidance of high-fidelity text-to-speech with synthetic annotations by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively. Contrarily to other TTS models, Parler-TTS is a fully open-source release. All of the datasets, pre-processing, training code and weights are released publicly under permissive license, enabling the community to build on our work and develop their own powerful TTS models. Parler-TTS was released alongside: * The Parler-TTS repository - you can train and fine-tuned your own version of the model. * The Data-Speech repository - a suite of utility scripts designed to annotate speech datasets. * The Parler-TTS organization - where you can find the annotated datasets as well as the future checkpoints. If you found this repository useful, please consider citing this work and also the original Stability AI paper: ## License This model is permissively licensed under the Apache 2.0 license.
[ "# Parler-TTS Mini v0.1\n\n<a target=\"_blank\" href=\"URL\n <img src=\"URL alt=\"Open in HuggingFace\"/>\n</a>\n\nParler-TTS Mini v0.1 is a lightweight text-to-speech (TTS) model, trained on 10.5K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).\nIt is the first release model from the Parler-TTS project, which aims to provide the community with TTS training resources and dataset pre-processing code.", "## Usage\n\nUsing Parler-TTS is as simple as \"bonjour\". Simply install the library once:\n\n\n\nYou can then use the model with the following inference snippet:\n\n\n\nTips:\n* Include the term \"very clear audio\" to generate the highest quality audio, and \"very noisy audio\" for high levels of background noise\n* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech\n* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt", "## Motivation\n\nParler-TTS is a reproduction of work from the paper Natural language guidance of high-fidelity text-to-speech with synthetic annotations by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively. \n\nContrarily to other TTS models, Parler-TTS is a fully open-source release. All of the datasets, pre-processing, training code and weights are released publicly under permissive license, enabling the community to build on our work and develop their own powerful TTS models.\nParler-TTS was released alongside:\n* The Parler-TTS repository - you can train and fine-tuned your own version of the model.\n* The Data-Speech repository - a suite of utility scripts designed to annotate speech datasets.\n* The Parler-TTS organization - where you can find the annotated datasets as well as the future checkpoints.\n\nIf you found this repository useful, please consider citing this work and also the original Stability AI paper:", "## License\n\nThis model is permissively licensed under the Apache 2.0 license." ]
[ "TAGS\n#transformers #safetensors #parler_tts #text2text-generation #text-to-speech #annotation #en #dataset-parler-tts/mls_eng_10k #dataset-blabble-io/libritts_r #dataset-parler-tts/libritts_r_tags_tagged_10k_generated #dataset-parler-tts/mls-eng-10k-tags_tagged_10k_generated #arxiv-2402.01912 #license-apache-2.0 #autotrain_compatible #region-us \n", "# Parler-TTS Mini v0.1\n\n<a target=\"_blank\" href=\"URL\n <img src=\"URL alt=\"Open in HuggingFace\"/>\n</a>\n\nParler-TTS Mini v0.1 is a lightweight text-to-speech (TTS) model, trained on 10.5K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).\nIt is the first release model from the Parler-TTS project, which aims to provide the community with TTS training resources and dataset pre-processing code.", "## Usage\n\nUsing Parler-TTS is as simple as \"bonjour\". Simply install the library once:\n\n\n\nYou can then use the model with the following inference snippet:\n\n\n\nTips:\n* Include the term \"very clear audio\" to generate the highest quality audio, and \"very noisy audio\" for high levels of background noise\n* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech\n* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt", "## Motivation\n\nParler-TTS is a reproduction of work from the paper Natural language guidance of high-fidelity text-to-speech with synthetic annotations by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively. \n\nContrarily to other TTS models, Parler-TTS is a fully open-source release. All of the datasets, pre-processing, training code and weights are released publicly under permissive license, enabling the community to build on our work and develop their own powerful TTS models.\nParler-TTS was released alongside:\n* The Parler-TTS repository - you can train and fine-tuned your own version of the model.\n* The Data-Speech repository - a suite of utility scripts designed to annotate speech datasets.\n* The Parler-TTS organization - where you can find the annotated datasets as well as the future checkpoints.\n\nIf you found this repository useful, please consider citing this work and also the original Stability AI paper:", "## License\n\nThis model is permissively licensed under the Apache 2.0 license." ]
text-generation
transformers
## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. 
#### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download the original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted (tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. 
The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. 
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. **Misuse** If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). 
#### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high-yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security</span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. 
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide). ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
mattshumer/Meta-Llama-3-8B-Instruct-LongTest
null
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T14:48:37+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Model Details
-------------

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

Model developers Meta

Variations Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.

Input Models input text only.

Output Models generate text and code only.

Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

Llama 3 family of models. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

Model Release Date April 18, 2024.

Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

License A custom commercial license is available at: URL

Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.

Intended Use
------------

Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.

Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.

How to use
----------

This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.

### Use with transformers

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both; a minimal pipeline sketch appears after the Contributors list at the end of this card.

#### Transformers pipeline

#### Transformers AutoModelForCausalLM

### Use with 'llama3'

Please follow the instructions in the repository.

To download the original checkpoints, see the example command below leveraging 'huggingface-cli':

For Hugging Face support, we recommend using transformers or TGI, but a similar command works.

Hardware and Software
---------------------

Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.

CO2 emissions during pre-training.
Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

Training Data
-------------

Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.

Benchmarks
----------

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology, see here.

### Base pretrained models

### Instruction tuned models

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.

Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out of the box, as those by their nature will differ across different applications.

Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training and fine-tuning to the deployment of systems composed of safeguards that tailor the safety needs specifically to the use case and audience.

As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs, and we provide a reference implementation to get you started.

#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

Safety

For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any large language model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

Refusals

In addition to residual risks, we put a great emphasis on model refusals to benign prompts.
Over-refusing can not only impact the user experience but could even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

#### Responsible release

In addition to the responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

Misuse

If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL

#### Critical risks

CBRNE (Chemical, Biological, Radiological, Nuclear, and high-yield Explosives)

We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

### Cyber Security

We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry-standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.

### Child Safety

Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our GitHub repository.

Finally, we put in place a set of resources including an output reporting mechanism and a bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows, specifically Llama Guard, which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide available at URL

Citation instructions:

@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url={URL}
}

Contributors
------------

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan
Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
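
As referenced in the "Use with transformers" section above, here is a minimal pipeline sketch. It is a reconstruction under stated assumptions, not the card's original snippet: it assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights, a recent transformers release with Llama 3 chat-template support, and a GPU with enough memory for bf16 weights.

```python
# Hedged sketch: conversational inference via the Transformers pipeline.
# Assumes gated access to the Meta-Llama-3-8B-Instruct repository.
import torch
import transformers

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain grouped-query attention in one sentence."},
]

# apply_chat_template renders the Llama 3 instruct format for us.
prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Llama 3 ends an assistant turn with <|eot_id|> in addition to the usual EOS.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Strip the prompt from the returned text to keep only the reply.
print(outputs[0]["generated_text"][len(prompt):])
```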
[ "### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.", "#### Transformers pipeline", "#### Transformers AutoModelForCausalLM", "### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.", "### Base pretrained models", "### Instruction tuned models", "### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. 
We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.", "#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.", "#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL", "#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).", "### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.", "### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. 
For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.", "### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.", "#### Transformers pipeline", "#### Transformers AutoModelForCausalLM", "### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.", "### Base pretrained models", "### Instruction tuned models", "### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. 
We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.", "#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.", "#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL", "#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).", "### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. 
On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.", "### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.", "### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos" ]
text-generation
transformers
big thanks to lore for the 8xH100 gpus

## training

base model is meta llama 3 8b instruct. trained on pippa, then i trained that model on limarp, both at 8k context for 2 epochs each

## gen settings

i would **start with** every sampler off and **temperature at 1 and just make min p 0.05**, i got good results from this, but u can also try the gen settings from shori which are copy pasted below (a minimal sketch of the baseline settings follows the prompt template at the end of this card)

- **Main choice** (may have repetition issues)
  - **Temperature**: 1.0; **Min-P**: 0.05-0.10; **Presence Penalty**: 0.35-0.45
- **Alternative 1** (appears to solve repetition issues while being coherent, but responses might be less truthful)
  - **Temperature**: 2.40-2.50; **Min-P**: 0.40; **Frequency penalty**: 0.10-0.15; Temperature last.
- **Alternative 2**
  - **Mirostat type**: 2; **Mirostat Tau**: 2.80-3.00; **Mirostat Eta**: 0.0175-0.0200; neutralize or disable all other samplers

## prompting

use the llama 3 instruct format

`<|eot_id|>` as stopping sequence/string/token

ST jsons: [instruct](https://files.catbox.moe/ocnjb7.json) [context](https://files.catbox.moe/hjkawf.json)

agnaistic prompt:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{#if system}}<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{system}}<|eot_id|>{{/if}}Write {{char}}'s next reply in a fictional roleplay chat between {{#each bot}}{{.name}}, {{/each}}{{char}} and {{user}}.

{{char}}'s Persona: {{personality}}

{{#if memory}}
Important details: {{memory}}
{{/if}}

{{#if example_dialogue}}This is how {{char}} should talk: {{example_dialogue}}{{/if}}

This scenario of the conversation: {{scenario}}

Then the roleplay chat between {{#each bot}}{{.name}}, {{/each}}{{char}} and {{user}} begins.<|eot_id|>

{{#each msg}}{{#if .isbot}}<|start_header_id|>response<|end_header_id|>{{/if}}{{#if .isuser}}<|start_header_id|>user<|end_header_id|>{{/if}}{{.name}}: {{.msg}}<|eot_id|>
{{/each}}

{{#if ujb}}<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{ujb}}<|eot_id|>{{/if}}
<|start_header_id|>response<|end_header_id|>{{post}}
```
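
as promised above, a minimal llama-cpp-python sketch of the baseline settings (every sampler off, temperature 1.0, min-p 0.05, `<|eot_id|>` as the stop string). the gguf filename and the use of llama-cpp-python here are illustrative assumptions, not part of this release:

```python
# sketch only: baseline sampling from the gen settings section above.
# the gguf filename is hypothetical; point model_path at any export of this model.
from llama_cpp import Llama

llm = Llama(model_path="tsukasa-llama-3.Q5_K_M.gguf", n_ctx=8192)

# llama 3 instruct format, loosely mirroring the agnaistic template above
# (which uses "response" as the assistant header name).
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>"
    "Write the character's next reply in a fictional roleplay chat.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>anon: hey, what's up?<|eot_id|>"
    "<|start_header_id|>response<|end_header_id|>"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.0,       # the "start with" baseline
    min_p=0.05,            # the only sampler left on
    top_p=1.0,             # neutralize the library defaults
    top_k=0,
    repeat_penalty=1.0,
    stop=["<|eot_id|>"],   # stopping sequence from the prompting section
)
print(out["choices"][0]["text"])
```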
{"datasets": ["PygmalionAI/PIPPA", "lemonilia/LimaRP"]}
lucyknada/ludis_tsukasa-llama-3-70b-qlora-4.5bpw-EXL2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:PygmalionAI/PIPPA", "dataset:lemonilia/LimaRP", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T14:48:44+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #dataset-PygmalionAI/PIPPA #dataset-lemonilia/LimaRP #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
big thanks to lore for the 8xH100 gpus ## training base model is meta llama 3 8b instruct trained on pippa then i trained that model on limarp, both at 8k context for 2 epochs each ## gen settings i would start with every sampler off and temperature at 1 and just make min p 0.05, i got good prompts from this but u can also try to gen settings from shori which are copy pasted below - Main choice (may have repetition issues) - Temperature: 1.0; Min-P: 0.05-0.10; Presence Penalty: 0.35-0.45 - Alternative 1 (appears to solve repetition issues while being coherent, but reponses might possibly be less truthful) - Temperature: 2.40-2.50; Min-P: 0.40; Frequency penalty: 0.10-0.15; Temperature last. - Alternative 2 - Mirostat type: 2, Mirostat Tau: 2.80-3.00; Mirostat Eta: 0.0175-0.0200; neutralize or disable all other samplers ## prompting use the llama 3 instruct format '<|eot_id|>' as stopping sequence/string/token ST jsons: instruct context agnaistic prompt:
[ "## training\n\nbase model is meta llama 3 8b instruct\ntrained on pippa then i trained that model on limarp, both at 8k context for 2 epochs each", "## gen settings\n\ni would start with every sampler off and temperature at 1 and just make min p 0.05, i got good prompts from this but u can also try to gen settings from shori which are copy pasted below\n\n- Main choice (may have repetition issues)\n - Temperature: 1.0; Min-P: 0.05-0.10; Presence Penalty: 0.35-0.45 \n- Alternative 1 (appears to solve repetition issues while being coherent, but reponses might possibly be less truthful)\n - Temperature: 2.40-2.50; Min-P: 0.40; Frequency penalty: 0.10-0.15; Temperature last.\n- Alternative 2\n - Mirostat type: 2, Mirostat Tau: 2.80-3.00; Mirostat Eta: 0.0175-0.0200; neutralize or disable all other samplers", "## prompting\n\nuse the llama 3 instruct format\n\n'<|eot_id|>' as stopping sequence/string/token\n\nST jsons:\ninstruct\ncontext\n\nagnaistic prompt:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #dataset-PygmalionAI/PIPPA #dataset-lemonilia/LimaRP #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## training\n\nbase model is meta llama 3 8b instruct\ntrained on pippa then i trained that model on limarp, both at 8k context for 2 epochs each", "## gen settings\n\ni would start with every sampler off and temperature at 1 and just make min p 0.05, i got good prompts from this but u can also try to gen settings from shori which are copy pasted below\n\n- Main choice (may have repetition issues)\n - Temperature: 1.0; Min-P: 0.05-0.10; Presence Penalty: 0.35-0.45 \n- Alternative 1 (appears to solve repetition issues while being coherent, but reponses might possibly be less truthful)\n - Temperature: 2.40-2.50; Min-P: 0.40; Frequency penalty: 0.10-0.15; Temperature last.\n- Alternative 2\n - Mirostat type: 2, Mirostat Tau: 2.80-3.00; Mirostat Eta: 0.0175-0.0200; neutralize or disable all other samplers", "## prompting\n\nuse the llama 3 instruct format\n\n'<|eot_id|>' as stopping sequence/string/token\n\nST jsons:\ninstruct\ncontext\n\nagnaistic prompt:" ]
null
transformers
# Medical-Llama3-8B-GGUF [![](future.jpg)](https://ruslanmv.com/) This is a fine-tuned version of the Llama3 8B model, specifically designed to answer medical questions. The model was trained on the AI Medical Chatbot dataset, which can be found at [ruslanmv/ai-medical-chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot). This fine-tuned model is distributed in the GGUF (GPT-Generated Unified Format) file format for efficient inference with low-bit quantization. **Model:** [ruslanmv/Medical-Llama3-8B-GGUF](https://huggingface.co/ruslanmv/Medical-Llama3-8B-GGUF) - **Developed by:** ruslanmv - **License:** apache-2.0 - **Finetuned from model:** meta-llama/Meta-Llama-3-8B ## Installation **Prerequisites:** - A system with CUDA support is highly recommended for optimal performance. - Python 3.10 or later 1. **Install required Python libraries** (the commands below are Jupyter/Colab cells; drop the leading `!` and `%%capture` in a plain shell): ```bash # GPU llama-cpp-python !CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir --verbose ``` ```bash %%capture !pip install huggingface-hub hf-transfer ``` 2. **Download the quantized model** (also a Jupyter cell): ```python import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" !huggingface-cli download \ ruslanmv/Medical-Llama3-8B-GGUF \ medical-llama3-8b.Q5_K_M.gguf \ --local-dir . \ --local-dir-use-symlinks False MODEL_PATH="/content/medical-llama3-8b.Q5_K_M.gguf" ``` ## Example of use Here's an example of how to use the quantized Medical-Llama3-8B-GGUF model (Q5_K_M in this example) to generate an answer to a medical question: ```python from llama_cpp import Llama B_INST, E_INST = "<s>[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" DEFAULT_SYSTEM_PROMPT = """\ You are an AI Medical Chatbot Assistant equipped with a wealth of medical knowledge derived from extensive datasets. You aim to provide comprehensive and informative responses to user inquiries. However, please note that while you strive for accuracy, your responses should not replace professional medical advice, and you should keep answers short. If a question does not make any sense, or is not factually coherent, explain why instead of answering something incorrect. If you don't know the answer to a question, please don't share false information.""" SYSTEM_PROMPT = B_SYS + DEFAULT_SYSTEM_PROMPT + E_SYS def create_prompt(user_query): instruction = f"User asks: {user_query}\n" prompt = B_INST + SYSTEM_PROMPT + instruction + E_INST return prompt.strip() user_query = "I'm a 35-year-old male experiencing symptoms like fatigue, increased sensitivity to cold, and dry, itchy skin. Could these be indicative of hypothyroidism?" prompt = create_prompt(user_query) print(prompt) llm = Llama(model_path=MODEL_PATH, n_gpu_layers=-1) result = llm( prompt=prompt, max_tokens=100, echo=False ) print(result['choices'][0]['text']) ``` Example output: ```text Hi, thank you for your query. Hypothyroidism is characterized by fatigue, sensitivity to cold, weight gain, depression, hair loss and mental dullness. I would suggest that you get a complete blood count with thyroid profile including TSH (thyroid stimulating hormone), free thyroxine level, and anti-thyroglobulin antibodies. These tests will help in establishing the diagnosis of hypothyroidism. If there is no family history of autoimmune disorders, then it might be due ``` ## License This model is licensed under the Apache License 2.0. You can find the full license in the LICENSE file.
{"language": "en", "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "ruslanmv", "llama", "trl"], "datasets": ["ruslanmv/ai-medical-chatbot"], "base_model": "meta-llama/Meta-Llama-3-8B"}
ruslanmv/Medical-Llama3-8B-GGUF
null
[ "transformers", "gguf", "text-generation-inference", "ruslanmv", "llama", "trl", "en", "dataset:ruslanmv/ai-medical-chatbot", "base_model:meta-llama/Meta-Llama-3-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:51:07+00:00
[]
[ "en" ]
TAGS #transformers #gguf #text-generation-inference #ruslanmv #llama #trl #en #dataset-ruslanmv/ai-medical-chatbot #base_model-meta-llama/Meta-Llama-3-8B #license-apache-2.0 #endpoints_compatible #region-us
# Medical-Llama3-8B-GGUF ![](URL This is a fine-tuned version of the Llama3 8B model, specifically designed to answer medical questions. The model was trained on the AI Medical Chatbot dataset, which can be found at ruslanmv/ai-medical-chatbot. This fine-tuned model is distributed in the GGUF (GPT-Generated Unified Format) file format for efficient inference with low-bit quantization. Model: ruslanmv/Medical-Llama3-8B-GGUF - Developed by: ruslanmv - License: apache-2.0 - Finetuned from model: meta-llama/Meta-Llama-3-8B ## Installation Prerequisites: - A system with CUDA support is highly recommended for optimal performance. - Python 3.10 or later 1. Install required Python libraries: 2. Download the quantized model: ## Example of use Here's an example of how to use the quantized Medical-Llama3-8B-GGUF model to generate an answer to a medical question: Example output: ## License This model is licensed under the Apache License 2.0. You can find the full license in the LICENSE file.
[ "# Medical-Llama3-8B-GGUF\n![](URL\nThis is a fine-tuned version of the Llama3 8B model, specifically designed to answer medical questions. \nThe model was trained on the AI Medical Chatbot dataset, which can be found at ruslanmv/ai-medical-chatbot. This fine-tuned model is distributed in the GGUF (GPT-Generated Unified Format) file format for efficient inference with low-bit quantization.\n\nModel: ruslanmv/Medical-Llama3-8B-GGUF\n\n- Developed by: ruslanmv\n- License: apache-2.0\n- Finetuned from model: meta-llama/Meta-Llama-3-8B", "## Installation\n\nPrerequisites:\n\n- A system with CUDA support is highly recommended for optimal performance.\n- Python 3.10 or later\n\n\n1. Install required Python libraries:\n\n\n \n\n \n\n2. Download the quantized model:", "## Example of use\n\nHere's an example of how to use the quantized Medical-Llama3-8B-GGUF model to generate an answer to a medical question:\n\n \n\nExample output:", "## License\n\nThis model is licensed under the Apache License 2.0. You can find the full license in the LICENSE file." ]
[ "TAGS\n#transformers #gguf #text-generation-inference #ruslanmv #llama #trl #en #dataset-ruslanmv/ai-medical-chatbot #base_model-meta-llama/Meta-Llama-3-8B #license-apache-2.0 #endpoints_compatible #region-us \n", "# Medical-Llama3-8B-GGUF\n![](URL\nThis is a fine-tuned version of the Llama3 8B model, specifically designed to answer medical questions. \nThe model was trained on the AI Medical Chatbot dataset, which can be found at ruslanmv/ai-medical-chatbot. This fine-tuned model is distributed in the GGUF (GPT-Generated Unified Format) file format for efficient inference with low-bit quantization.\n\nModel: ruslanmv/Medical-Llama3-8B-GGUF\n\n- Developed by: ruslanmv\n- License: apache-2.0\n- Finetuned from model: meta-llama/Meta-Llama-3-8B", "## Installation\n\nPrerequisites:\n\n- A system with CUDA support is highly recommended for optimal performance.\n- Python 3.10 or later\n\n\n1. Install required Python libraries:\n\n\n \n\n \n\n2. Download the quantized model:", "## Example of use\n\nHere's an example of how to use the quantized Medical-Llama3-8B-GGUF model to generate an answer to a medical question:\n\n \n\nExample output:", "## License\n\nThis model is licensed under the Apache License 2.0. You can find the full license in the LICENSE file." ]
null
transformers
# Uploaded model - **Developed by:** Thanabordee - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
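A hedged loading sketch for this GGUF export (the filename pattern and quant level are assumptions, so check the repository's file list first; `Llama.from_pretrained` assumes a recent llama-cpp-python release with the huggingface-hub extra installed):

```python
# minimal sketch: pull a gguf file from this repo and run a short prompt
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

llm = Llama.from_pretrained(
    repo_id="Thanabordee/Llama-3-Han-TH-gguf",
    filename="*Q4_K_M.gguf",  # hypothetical glob; pick a file that actually exists in the repo
    n_ctx=4096,
)
print(llm("Hello!", max_tokens=64)["choices"][0]["text"])
```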
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Thanabordee/Llama-3-Han-TH-gguf
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:52:04+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: Thanabordee - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Thanabordee\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: Thanabordee\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough slices: - sources: - layer_range: [0, 22] model: model: path: meta-llama/Meta-Llama-3-8B - sources: - layer_range: [30, 32] model: model: path: meta-llama/Meta-Llama-3-8B ```
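As a hedged reproduction sketch (assuming the mergekit CLI is installed and its flags have not changed since this card was written), the configuration above could be applied with:

```bash
pip install mergekit
# save the YAML above as config.yaml, then run; the output directory name is arbitrary
mergekit-yaml config.yaml ./Llama-3-6.3B-no-healing --cuda
```

Passthrough merges like this one simply stack the listed layer ranges, which is why the result has fewer layers (hence "6.3B") than the 32-layer source model.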
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["meta-llama/Meta-Llama-3-8B"]}
ChuGyouk/Llama-3-6.3B-no-healing
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "en", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T14:52:15+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #en #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merged This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * meta-llama/Meta-Llama-3-8B ### Configuration The following YAML configuration was used to produce this model:
[ "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #en #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: amazingT/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
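A hedged download sketch (the `mlagents-load-from-hf` entry point ships with the ml-agents Hub integration used in the course linked above; exact flags may vary by version):

```bash
# fetch this trained agent from the Hub into a local folder, then resume
# training or watch it locally with the mlagents-learn command above
mlagents-load-from-hf --repo-id="amazingT/ppo-Huggy" --local-dir="./downloads/ppo-Huggy"
```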
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
amazingT/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-23T14:52:37+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: amazingT/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how ML-Agents works:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: amazingT/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how ML-Agents works:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: amazingT/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # contrast_classifier_biobert_v3 This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2394 - Accuracy: 0.9556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.045 | 1.0 | 622 | 0.2081 | 0.9524 | | 0.0009 | 2.0 | 1244 | 0.2234 | 0.9522 | | 0.001 | 3.0 | 1866 | 0.2394 | 0.9556 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
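The card lists no usage snippet; as a hedged sketch (the label names returned depend on this model's config, which is not documented here), classification can be run with the standard pipeline API:

```python
from transformers import pipeline

# hypothetical usage; the id2label mapping is whatever the model config defines
clf = pipeline("text-classification", model="Granoladata/contrast_classifier_biobert_v3")
print(clf("The treatment group improved, whereas the control group showed no change."))
# -> [{'label': ..., 'score': ...}]
```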
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "dmis-lab/biobert-v1.1", "model-index": [{"name": "contrast_classifier_biobert_v3", "results": []}]}
Granoladata/contrast_classifier_biobert_v3
null
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:dmis-lab/biobert-v1.1", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:52:44+00:00
[]
[]
TAGS #transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-dmis-lab/biobert-v1.1 #autotrain_compatible #endpoints_compatible #region-us
contrast\_classifier\_biobert\_v3 ================================= This model is a fine-tuned version of dmis-lab/biobert-v1.1 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2394 * Accuracy: 0.9556 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.2+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-dmis-lab/biobert-v1.1 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
reinforcement-learning
stable-baselines3
# **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming and is an assumption; check the repo's file list): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import A2C checkpoint = load_from_hub(repo_id="SparkleDark/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip") model = A2C.load(checkpoint) ```
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.24 +/- 0.08", "name": "mean_reward", "verified": false}]}]}]}
SparkleDark/a2c-PandaReachDense-v3
null
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-23T14:52:55+00:00
[]
[]
TAGS #stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# A2C Agent playing PandaReachDense-v3 This is a trained model of an A2C agent playing PandaReachDense-v3 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of an A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of an A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
null
# pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf This is a GGUF-format conversion of [nekomata-14b-pfn-qfin-inst-merge, published by pfnet](https://huggingface.co/pfnet/nekomata-14b-pfn-qfin-inst-merge). The imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm. ## License This model is under the tongyi-qianwen license. [Please review the license before use.](https://huggingface.co/pfnet/nekomata-14b-pfn-qfin-inst-merge/blob/main/LICENSE) ## Other models [mmnga/pfnet-nekomata-14b-pfn-qfin-gguf](https://huggingface.co/mmnga/pfnet-nekomata-14b-pfn-qfin-gguf) [mmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf](https://huggingface.co/mmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf) ## Usage ```bash git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'pfnet-nekomata-14b-pfn-qfin-inst-merge-q4_0.gguf' -n 128 --temp 0.5 -p '### 指示:次の日本語を英語に翻訳してください。\n\n### 入力: 大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。 \n\n### 応答:' ```
{"language": ["en", "ja"], "license": "other", "tags": ["qwen"], "datasets": ["TFMC/imatrix-dataset-for-japanese-llm"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/pfnet/nekomata-14b-pfn-qfin-inst-merge/blob/main/LICENSE"}
mmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf
null
[ "gguf", "qwen", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:other", "region:us" ]
null
2024-04-23T14:53:08+00:00
[]
[ "en", "ja" ]
TAGS #gguf #qwen #en #ja #dataset-TFMC/imatrix-dataset-for-japanese-llm #license-other #region-us
# pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf This is a GGUF-format conversion of nekomata-14b-pfn-qfin-inst-merge, published by pfnet. The imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm. ## License This model is under the tongyi-qianwen license. Please review the license before use. ## Other models mmnga/pfnet-nekomata-14b-pfn-qfin-gguf mmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf ## Usage
[ "# pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf \nThis is a GGUF-format conversion of nekomata-14b-pfn-qfin-inst-merge, published by pfnet.\n\nThe imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm.", "## License\nThis model is under the tongyi-qianwen license. \nPlease review the license before use.", "## Other models\nmmnga/pfnet-nekomata-14b-pfn-qfin-gguf \nmmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf", "## Usage" ]
[ "TAGS\n#gguf #qwen #en #ja #dataset-TFMC/imatrix-dataset-for-japanese-llm #license-other #region-us \n", "# pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf \nThis is a GGUF-format conversion of nekomata-14b-pfn-qfin-inst-merge, published by pfnet.\n\nThe imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm.", "## License\nThis model is under the tongyi-qianwen license. \nPlease review the license before use.", "## Other models\nmmnga/pfnet-nekomata-14b-pfn-qfin-gguf \nmmnga/pfnet-nekomata-14b-pfn-qfin-inst-merge-gguf", "## Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Guilherme34/Samantha-v5-wizardlm2
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T14:53:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
sentence-similarity
sentence-transformers
# ai-maker-space/snowflake-ft This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ai-maker-space/snowflake-ft') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ai-maker-space/snowflake-ft) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 50, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
ai-maker-space/snowflake-ft
null
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:53:25+00:00
[]
[]
TAGS #sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us
# ai-maker-space/snowflake-ft This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 11 with parameters: Loss: 'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters: Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# ai-maker-space/snowflake-ft\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 11 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n", "# ai-maker-space/snowflake-ft\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 11 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
text-generation
null
## Llamacpp imatrix Quantizations of Meta-Llama-3-70B-Instruct Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2714">b2714</a> for quantization. Original model: https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct All quants were made using the imatrix option with the dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Meta-Llama-3-70B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/tree/main/Meta-Llama-3-70B-Instruct-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. | | [Meta-Llama-3-70B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/tree/main/Meta-Llama-3-70B-Instruct-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. | | [Meta-Llama-3-70B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. | | [Meta-Llama-3-70B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q5_K_S.gguf) | Q5_K_S | 48.65GB | High quality, *recommended*. | | [Meta-Llama-3-70B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Meta-Llama-3-70B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q4_K_S.gguf) | Q4_K_S | 40.34GB | Slightly lower quality with more space savings, *recommended*. | | [Meta-Llama-3-70B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ4_NL.gguf) | IQ4_NL | 40.05GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. | | [Meta-Llama-3-70B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Meta-Llama-3-70B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_L.gguf) | Q3_K_L | 37.14GB | Lower quality but usable, good for low RAM availability. | | [Meta-Llama-3-70B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. | | [Meta-Llama-3-70B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M.
| | [Meta-Llama-3-70B-Instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_S.gguf) | IQ3_S | 30.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Meta-Llama-3-70B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. | | [Meta-Llama-3-70B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_XS.gguf) | IQ3_XS | 29.30GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Meta-Llama-3-70B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Meta-Llama-3-70B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. | | [Meta-Llama-3-70B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Meta-Llama-3-70B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ2_S.gguf) | IQ2_S | 22.24GB | Very low quality, uses SOTA techniques to be usable. | | [Meta-Llama-3-70B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Very low quality, uses SOTA techniques to be usable. | | [Meta-Llama-3-70B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. | | [Meta-Llama-3-70B-Instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. | | [Meta-Llama-3-70B-Instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
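A hedged download sketch (this uses the huggingface-cli that ships with recent huggingface_hub releases; the file name must match one of the quants in the table above):

```bash
pip install -U "huggingface_hub[cli]"
# grab a single-file quant, e.g. Q4_K_M, into the current directory
huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF \
  Meta-Llama-3-70B-Instruct-Q4_K_M.gguf --local-dir .
```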
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "widget": [{"example_title": "Winter holidays", "messages": [{"role": "system", "content": "You are a helpful and honest assistant. 
Please, respond concisely and truthfully."}, {"role": "user", "content": "Can you recommend a good destination for Winter holidays?"}]}, {"example_title": "Programming assistant", "messages": [{"role": "system", "content": "You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully."}, {"role": "user", "content": "Write a function that computes the nth fibonacci number."}]}], "inference": {"parameters": {"max_new_tokens": 300, "stop": ["<|end_of_text|>", "<|eot_id|>"]}}, "quantized_by": "bartowski"}
bartowski/Meta-Llama-3-70B-Instruct-old-GGUF
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-23T14:54:51+00:00
[]
[ "en" ]
TAGS #gguf #facebook #meta #pytorch #llama #llama-3 #text-generation #en #license-other #region-us
Llamacpp imatrix Quantizations of Meta-Llama-3-70B-Instruct ----------------------------------------------------------- Using llama.cpp release (URL) for quantization. Original model: URL All quants made using the imatrix option with the dataset provided by Kalomaze here Prompt format ------------- Download a file (not the whole branch) from below: -------------------------------------------------- Which file should I choose? --------------------------- A great write-up with charts showing various performances is provided by Artefact2 here The first thing to figure out is how big a model you can run. To do this, you'll need to know how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing in your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide whether you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in the format 'QX\_K\_X', like Q5\_K\_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: URL feature matrix But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in the format IQX\_X, like IQ3\_M. They are newer and offer better performance for their size. The I-quants can also be used on CPU and Apple Metal, but they will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which AMD cards can also use, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: URL
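Once you've picked a quant, you can pull just that file instead of cloning the whole repo. A minimal sketch using `huggingface_hub`; the exact GGUF filename below is an assumption for illustration, so check the repo's file listing for the real quant names and sizes:

```python
# Minimal sketch: download a single GGUF quant file rather than the whole branch.
# Assumes huggingface_hub is installed; the filename is hypothetical - check the
# repo's file list for the actual quant names.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Meta-Llama-3-70B-Instruct-old-GGUF",
    filename="Meta-Llama-3-70B-Instruct-Q4_K_M.gguf",  # hypothetical quant filename
)
print(path)  # local path to the downloaded file
```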
[]
[ "TAGS\n#gguf #facebook #meta #pytorch #llama #llama-3 #text-generation #en #license-other #region-us \n" ]
null
null
# LewdPlay-8B

May 1st 2024: GGUF files have been fixed with [this PR of llama.cpp](https://github.com/ggerganov/llama.cpp/pull/6920)

<!-- description start -->
## Description

This repo contains GGUF files of LewdPlay Llama3, a model fine-tuned on multiple RP datasets, based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).

<!-- description end -->
<!-- description start -->
## Dataset used

- Undi95/toxic-dpo-v0.1
- NobodyExistsOnTheInternet/ToxicQAFinal
- Aesir [2] & [3 - SFW / 3 - NSFW]
- cgato/SlimOrcaDedupCleaned (reduced)
- Undi95/Capybara-ShareGPT (reduced)
- Airoboros (reduced)
- Pippa (cleaned/reduced)
- Bluemoon (cleaned/reduced)
- LimaRP (8k ctx)

<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Llama3

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```

SillyTavern files for the Llama3 prompt format (if you don't have them yet): [Context](https://files.catbox.moe/hjkawf.json) - [Instruct](https://files.catbox.moe/2liomr.json)

## Usage

Works best with a well-written character card that includes some example messages, memory, etc.

## Support

If you want to support me, you can [here](https://ko-fi.com/undiai).
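For clarity, a minimal sketch of filling the prompt template shown above in plain Python; the system prompt and user input are placeholder strings:

```python
# Minimal sketch: assemble a Llama 3 prompt from the template above.
# The system prompt and user input below are placeholders.
system_prompt = "You are a helpful roleplay assistant."
user_input = "Hello there!"

prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(prompt)  # feed this to your GGUF runtime of choice
```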
{"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw"]}
Undi95/Llama-3-LewdPlay-8B-GGUF
null
[ "gguf", "not-for-all-audiences", "nsfw", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-23T14:55:09+00:00
[]
[]
TAGS #gguf #not-for-all-audiences #nsfw #license-cc-by-nc-4.0 #region-us
# LewdPlay-8B May 1st 2024: GGUF files have been fixed with this PR of URL ## Description This repo contains GGUF files of LewdPlay Llama3, a model fine-tuned on multiple RP datasets, based on meta-llama/Meta-Llama-3-8B-Instruct. ## Dataset used - Undi95/toxic-dpo-v0.1 - NobodyExistsOnTheInternet/ToxicQAFinal - Aesir [2] & [3 - SFW / 3 - NSFW] - cgato/SlimOrcaDedupCleaned (reduced) - Undi95/Capybara-ShareGPT (reduced) - Airoboros (reduced) - Pippa (cleaned/reduced) - Bluemoon (cleaned/reduced) - LimaRP (8k ctx) ## Prompt template: Llama3 SillyTavern files for the Llama3 prompt format (if you don't have them yet): Context - Instruct ## Usage Works best with a well-written character card that includes some example messages, memory, etc. ## Support If you want to support me, you can here.
[ "# LewdPlay-8B\n\nMay 1st 2024: GGUF have been fixed with this PR of URL", "## Description\n\nThis repo contains GGUF files of LewdPlay Llama3, a finetuned model on multiple RP datasets based on meta-llama/Meta-Llama-3-8B-Instruct.", "## Dataset used\n\n- Undi95/toxic-dpo-v0.1\n- NobodyExistsOnTheInternet/ToxicQAFinal\n- Aesir [2] & [3 - SFW / 3 - NSFW]\n- cgato/SlimOrcaDedupCleaned (reduced)\n- Undi95/Capybara-ShareGPT (reduced)\n- Airobors (reduced)\n- Pippa (cleaned/reduced)\n- Bluemoon (cleaned/reduced)\n- LimaRP (8k ctx)", "## Prompt template: Llama3\n\n\n\nSillyTavern files of Llama3 prompt format (if you still don't have them) : Context - Instruct", "## Usage\n\nWork best with character card well written, with some exemple message, memory, etc...", "## Support\n\nIf you want to support me, you can here." ]
[ "TAGS\n#gguf #not-for-all-audiences #nsfw #license-cc-by-nc-4.0 #region-us \n", "# LewdPlay-8B\n\nMay 1st 2024: GGUF have been fixed with this PR of URL", "## Description\n\nThis repo contains GGUF files of LewdPlay Llama3, a finetuned model on multiple RP datasets based on meta-llama/Meta-Llama-3-8B-Instruct.", "## Dataset used\n\n- Undi95/toxic-dpo-v0.1\n- NobodyExistsOnTheInternet/ToxicQAFinal\n- Aesir [2] & [3 - SFW / 3 - NSFW]\n- cgato/SlimOrcaDedupCleaned (reduced)\n- Undi95/Capybara-ShareGPT (reduced)\n- Airobors (reduced)\n- Pippa (cleaned/reduced)\n- Bluemoon (cleaned/reduced)\n- LimaRP (8k ctx)", "## Prompt template: Llama3\n\n\n\nSillyTavern files of Llama3 prompt format (if you still don't have them) : Context - Instruct", "## Usage\n\nWork best with character card well written, with some exemple message, memory, etc...", "## Support\n\nIf you want to support me, you can here." ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [Enkhmunkh Orgil] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
peace4ever/roberta-large-finetuned-mongolian_v1
null
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:55:33+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #xlm-roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: [Enkhmunkh Orgil] - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: [Enkhmunkh Orgil]\n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: [Enkhmunkh Orgil]\n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-noleakidentity-200-50
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:55:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# output This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: cognitivecomputations/dolphin-2.9-llama3-8b layer_range: [0, 15] - sources: - model: cognitivecomputations/dolphin-2.9-llama3-8b layer_range: [10, 20] - sources: - model: cognitivecomputations/dolphin-2.9-llama3-8b layer_range: [15, 25] - sources: - model: cognitivecomputations/dolphin-2.9-llama3-8b layer_range: [20, 31] merge_method: passthrough dtype: float16 ```
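As a usage note, here is a minimal sketch of running a config like the one above with mergekit's `mergekit-yaml` CLI, invoked from Python; the config and output paths are hypothetical:

```python
# Minimal sketch: run the passthrough merge config above with mergekit's CLI.
# Assumes mergekit is installed (pip install mergekit); both paths are hypothetical.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./merged-output"],
    check=True,  # raise if the merge fails
)
```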
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cognitivecomputations/dolphin-2.9-llama3-8b"]}
Gustav0-Freind/nm
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T14:55:55+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# output This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * cognitivecomputations/dolphin-2.9-llama3-8b ### Configuration The following YAML configuration was used to produce this model:
[ "# output\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.9-llama3-8b", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# output\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.9-llama3-8b", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
null
This was a private test QLoRA that some goon on 4chan wants hands on. This is a mistake. It's outright hostile, btw; I went WAY too far correcting this. I thought you were stronger. Regardless, for what it's worth, here it is. This requires you to get your shit together and format for Llama 3 correctly. If you get it wrong, it's your own fault. Link to model https://huggingface.co/astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit
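Since the note above stresses formatting for Llama 3 correctly, a minimal sketch of producing the prompt format via the tokenizer's chat template; this assumes transformers is installed and that you have access to a Llama 3 tokenizer, and the messages are placeholders:

```python
# Minimal sketch: format messages with the Llama 3 chat template.
# Assumes access to the (gated) Llama 3 tokenizer; messages are placeholders.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # correctly formatted Llama 3 prompt string
```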
{"license": "mit"}
qq67878980/LLama3UncensorTest1
null
[ "safetensors", "license:mit", "region:us" ]
null
2024-04-23T14:56:21+00:00
[]
[]
TAGS #safetensors #license-mit #region-us
This was a private test QLoRA that some goon on 4chan wants hands on. This is a mistake. It's outright hostile, btw; I went WAY too far correcting this. I thought you were stronger. Regardless, for what it's worth, here it is. This requires you to get your shit together and format for Llama 3 correctly. If you get it wrong, it's your own fault. Link to model URL
[]
[ "TAGS\n#safetensors #license-mit #region-us \n" ]
text-to-image
diffusers
# API Inference ![generated from modelslab.com](https://cdn2.stablediffusionapi.com/generations/bf190b5a-fe19-437c-ba05-82f29cb1f7ad-0.png) ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "realcartoon-anime-v11" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs) Try model for free: [Generate Images](https://modelslab.com/models/realcartoon-anime-v11) Model link: [View model](https://modelslab.com/models/realcartoon-anime-v11) View all models: [View Models](https://modelslab.com/models) import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "realcartoon-anime-v11", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
stablediffusionapi/realcartoon-anime-v11
null
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-23T14:56:35+00:00
[]
[]
TAGS #diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# API Inference !generated from URL ## Get API Key Get API key from ModelsLab API, No Payment needed. Replace Key in below code, change model_id to "realcartoon-anime-v11" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs Try model for free: Generate Images Model link: View model View all models: View Models import requests import json url = "URL payload = URL({ "key": "your_api_key", "model_id": "realcartoon-anime-v11", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(URL) > Use this coupon code to get 25% off DMGG0RBN
[ "# API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"realcartoon-anime-v11\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"realcartoon-anime-v11\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
[ "TAGS\n#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"realcartoon-anime-v11\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"realcartoon-anime-v11\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLPGroupProject-Finetune-Funnel-Transformer This model is a fine-tuned version of [funnel-transformer/intermediate-base](https://huggingface.co/funnel-transformer/intermediate-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3863 - Accuracy: 0.263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4073 | 0.25 | 500 | 1.3863 | 0.262 | | 1.403 | 0.5 | 1000 | 1.3863 | 0.275 | | 1.4031 | 0.75 | 1500 | 1.3863 | 0.263 | | 1.4035 | 1.0 | 2000 | 1.3863 | 0.259 | | 1.3984 | 1.25 | 2500 | 1.3863 | 0.283 | | 1.3904 | 1.5 | 3000 | 1.3863 | 0.263 | | 1.3977 | 1.75 | 3500 | 1.3863 | 0.252 | | 1.3949 | 2.0 | 4000 | 1.3863 | 0.272 | | 1.3979 | 2.25 | 4500 | 1.3863 | 0.258 | | 1.3965 | 2.5 | 5000 | 1.3863 | 0.225 | | 1.3944 | 2.75 | 5500 | 1.3863 | 0.246 | | 1.3999 | 3.0 | 6000 | 1.3863 | 0.263 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
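A minimal sketch of scoring multiple-choice inputs with the fine-tuned model; the repo id comes from this card's metadata, and the question and choices are placeholders:

```python
# Minimal sketch: load the fine-tuned model and score two candidate answers.
# Repo id from this card's metadata; question and choices are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "BenjaminTT/NLPGroupProject-Finetune-Funnel-Transformer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "The capital of France is"
choices = ["Paris", "A croissant"]
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # (batch=1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**enc).logits
print(logits.softmax(-1))  # probability per choice
```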
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "funnel-transformer/intermediate-base", "model-index": [{"name": "NLPGroupProject-Finetune-Funnel-Transformer", "results": []}]}
BenjaminTT/NLPGroupProject-Finetune-Funnel-Transformer
null
[ "transformers", "safetensors", "funnel", "multiple-choice", "generated_from_trainer", "base_model:funnel-transformer/intermediate-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:57:16+00:00
[]
[]
TAGS #transformers #safetensors #funnel #multiple-choice #generated_from_trainer #base_model-funnel-transformer/intermediate-base #license-apache-2.0 #endpoints_compatible #region-us
NLPGroupProject-Finetune-Funnel-Transformer =========================================== This model is a fine-tuned version of funnel-transformer/intermediate-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.3863 * Accuracy: 0.263 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.2+cu118 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #funnel #multiple-choice #generated_from_trainer #base_model-funnel-transformer/intermediate-base #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Classifier_with_external_sets_05 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2840 - Accuracy: 0.9627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:| | No log | 0.9983 | 289 | 0.6833 | 0.7547 | | 0.403 | 2.0 | 579 | 0.4286 | 0.7700 | | 0.403 | 2.9983 | 868 | 0.5718 | 0.8196 | | 0.1978 | 4.0 | 1158 | 0.3336 | 0.8813 | | 0.1978 | 4.9983 | 1447 | 0.3455 | 0.8795 | | 0.1523 | 6.0 | 1737 | 0.5141 | 0.8398 | | 0.1371 | 6.9983 | 2026 | 0.2422 | 0.9291 | | 0.1371 | 8.0 | 2316 | 0.1653 | 0.9486 | | 0.1073 | 8.9983 | 2605 | 0.1606 | 0.9480 | | 0.1073 | 10.0 | 2895 | 0.3522 | 0.8991 | | 0.0966 | 10.9983 | 3184 | 0.2096 | 0.9309 | | 0.0966 | 12.0 | 3474 | 0.1263 | 0.9664 | | 0.0887 | 12.9983 | 3763 | 0.2030 | 0.9529 | | 0.0935 | 14.0 | 4053 | 0.1045 | 0.9676 | | 0.0935 | 14.9983 | 4342 | 0.1270 | 0.9664 | | 0.0751 | 16.0 | 4632 | 0.1873 | 0.9596 | | 0.0751 | 16.9983 | 4921 | 0.2181 | 0.9621 | | 0.0644 | 18.0 | 5211 | 0.1207 | 0.9713 | | 0.0589 | 18.9983 | 5500 | 0.3134 | 0.9315 | | 0.0589 | 20.0 | 5790 | 0.2447 | 0.9505 | | 0.0451 | 20.9983 | 6079 | 0.2650 | 0.9474 | | 0.0451 | 22.0 | 6369 | 0.2205 | 0.9596 | | 0.0414 | 22.9983 | 6658 | 0.1899 | 0.9657 | | 0.0414 | 24.0 | 6948 | 0.2518 | 0.9590 | | 0.0415 | 24.9983 | 7237 | 0.2175 | 0.9572 | | 0.0358 | 26.0 | 7527 | 0.3080 | 0.9462 | | 0.0358 | 26.9983 | 7816 | 0.2570 | 0.9474 | | 0.0332 | 28.0 | 8106 | 0.2519 | 0.9554 | | 0.0332 | 28.9983 | 8395 | 0.3117 | 0.9492 | | 0.028 | 30.0 | 8685 | 0.3270 | 0.9517 | | 0.028 | 30.9983 | 8974 | 0.2641 | 0.9602 | | 0.0281 | 32.0 | 9264 | 0.2669 | 0.9615 | | 0.0227 | 32.9983 | 9553 | 0.2558 | 0.9615 | | 0.0227 | 34.0 | 9843 | 0.3255 | 0.9505 | | 0.0218 | 34.9983 | 10132 | 0.3818 | 0.9431 | | 0.0218 | 36.0 | 10422 | 0.2411 | 0.9657 | | 0.0224 | 36.9983 | 10711 | 0.2391 | 0.9645 | | 0.0201 | 38.0 | 11001 | 0.3097 | 0.9602 | | 0.0201 | 38.9983 | 11290 | 0.3057 | 0.9590 | | 0.0168 | 40.0 | 11580 | 0.2537 | 0.9621 | | 0.0168 | 40.9983 | 11869 | 0.2661 | 0.9615 | | 0.0171 | 42.0 | 12159 | 0.3151 | 0.9590 | | 0.0171 | 42.9983 | 12448 | 0.2814 | 0.9621 | | 0.0176 | 44.0 | 12738 | 0.2748 | 0.9633 | | 0.0153 | 44.9983 | 13027 | 0.2950 | 0.9633 | | 0.0153 | 46.0 | 13317 | 0.3171 | 0.9596 | | 0.0133 | 46.9983 | 13606 | 0.2659 | 0.9633 | | 0.0133 | 48.0 | 13896 | 0.3022 | 0.9633 | | 0.0142 | 48.9983 | 14185 | 0.3028 | 0.9609 | | 0.0142 | 49.9136 | 14450 | 0.2840 | 0.9627 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
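A minimal sketch of running the fine-tuned classifier through the pipeline API; the repo id comes from this card's metadata, and the input sentence is a placeholder:

```python
# Minimal sketch: run the fine-tuned classifier with the pipeline API.
# Repo id from this card's metadata; the input text is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Tensorride/Classifier_with_external_sets_05",
)
print(classifier("An example sentence to classify."))
```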
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "Classifier_with_external_sets_05", "results": []}]}
Tensorride/Classifier_with_external_sets_05
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:57:24+00:00
[]
[]
TAGS #transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
Classifier\_with\_external\_sets\_05 ==================================== This model is a fine-tuned version of microsoft/deberta-v3-large on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.2840 * Accuracy: 0.9627 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-06 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 50 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.2+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
slimaneMakh/tableClassification_23avril-triplet10-peft-lora
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-23T14:58:40+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# Antler-7B-Novel-Writing-GGUF

## Overview
A quantized GGUF version of [Aratako/Antler-7B-Novel-Writing](https://huggingface.co/Aratako/Antler-7B-Novel-Writing). Please see the original model for the license and other details.
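For illustration only, a minimal sketch (not from the original card) of loading one of the GGUF files with llama-cpp-python; the `*Q4_K_M.gguf` filename glob and the context size are assumptions, since the exact quant files shipped in the repo are not listed here.

```python
# Hypothetical usage sketch: pull a GGUF file from this repo with
# llama-cpp-python; the "*Q4_K_M.gguf" glob assumes such a quant exists.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Aratako/Antler-7B-Novel-Writing-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant name; use whichever file the repo ships
    n_ctx=4096,               # assumed context length
)

# Japanese story opener ("Once upon a time, in a certain place...") for this
# Japanese novel-writing model.
out = llm("昔々、あるところに", max_tokens=128)
print(out["choices"][0]["text"])
```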
{"language": ["ja"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "nsfw"], "datasets": ["Aratako/Syosetu711K-Cleaned-158K-Instruct"], "base_model": ["Aratako/Antler-7B-Novel-Writing"]}
Aratako/Antler-7B-Novel-Writing-GGUF
null
[ "gguf", "not-for-all-audiences", "nsfw", "ja", "dataset:Aratako/Syosetu711K-Cleaned-158K-Instruct", "base_model:Aratako/Antler-7B-Novel-Writing", "license:apache-2.0", "region:us" ]
null
2024-04-23T15:00:00+00:00
[]
[ "ja" ]
TAGS #gguf #not-for-all-audiences #nsfw #ja #dataset-Aratako/Syosetu711K-Cleaned-158K-Instruct #base_model-Aratako/Antler-7B-Novel-Writing #license-apache-2.0 #region-us
# Antler-7B-Novel-Writing-GGUF

## Overview
A quantized GGUF version of Aratako/Antler-7B-Novel-Writing. Please see the original model for the license and other details.
[ "# Antler-7B-Novel-Writing-GGUF", "## 概要\nAratako/Antler-7B-Novel-Writingの量子化済みGGUF版です。ライセンス等詳細は元モデルをご確認ください。" ]
[ "TAGS\n#gguf #not-for-all-audiences #nsfw #ja #dataset-Aratako/Syosetu711K-Cleaned-158K-Instruct #base_model-Aratako/Antler-7B-Novel-Writing #license-apache-2.0 #region-us \n", "# Antler-7B-Novel-Writing-GGUF", "## 概要\nAratako/Antler-7B-Novel-Writingの量子化済みGGUF版です。ライセンス等詳細は元モデルをご確認ください。" ]
text-generation
transformers
# merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough slices: - sources: - layer_range: [0, 24] model: model: path: meta-llama/Meta-Llama-3-8B - sources: - layer_range: [8, 32] model: model: path: meta-llama/Meta-Llama-3-8B ```
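For reference, a hedged sketch of executing the same config programmatically; the entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) are assumptions based on recent mergekit versions, and the YAML above is assumed to be saved as `config.yaml`:

```python
# Hypothetical sketch: run the YAML merge config above via mergekit's Python API.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the passthrough merge config shown in the card.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the merged model to ./merged; options shown are assumptions.
run_merge(
    merge_config,
    "./merged",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```

The equivalent CLI route (`mergekit-yaml config.yaml ./merged`) is the more common way to run such configs.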
{"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["meta-llama/Meta-Llama-3-8B"]}
ChuGyouk/Llama-3-11.5B-DUS-no-cpt
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:01:17+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merged This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * meta-llama/Meta-Llama-3-8B ### Configuration The following YAML configuration was used to produce this model:
[ "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub("hossniper/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
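A short follow-up sketch (an illustrative addition, not part of the original card): score the loaded policy with stable-baselines3's evaluation helper, assuming gymnasium is installed and `model` comes from the snippet above.

```python
# Evaluate the loaded PPO policy over a few episodes.
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```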
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "265.54 +/- 21.18", "name": "mean_reward", "verified": false}]}]}]}
hossniper/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-23T15:01:50+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
reinforcement-learning
sample-factory
An **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r phoenixaiden33/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
{"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "11.18 +/- 4.44", "name": "mean_reward", "verified": false}]}]}]}
phoenixaiden33/rl_course_vizdoom_health_gathering_supreme
null
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-23T15:03:20+00:00
[]
[]
TAGS #sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
An APPO model trained on the doom_health_gathering_supreme environment. This model was trained using Sample-Factory 2.0: URL Documentation for how to use Sample-Factory can be found at URL ## Downloading the model After installing Sample-Factory, download the model with: ## Using the model To run the model after download, use the 'enjoy' script corresponding to this environment: You can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag. See URL for more details ## Training with this model To continue training with this model, use the 'train' script corresponding to this environment: Note: you may have to adjust '--train_for_env_steps' to a suitably high number, as the experiment will resume at the number of steps it concluded at.
[ "## Downloading the model\n\nAfter installing Sample-Factory, download the model with:", "## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details", "## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at." ]
[ "TAGS\n#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "## Downloading the model\n\nAfter installing Sample-Factory, download the model with:", "## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details", "## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # themetagsv1 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1490 - Rouge1: 0.4434 - Rouge2: 0.2049 - Rougel: 0.4334 - Gen Len: 12.4621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:-------:| | No log | 0.1 | 100 | 0.2118 | 0.3207 | 0.1074 | 0.3112 | 12.4598 | | No log | 0.19 | 200 | 0.1978 | 0.3409 | 0.1145 | 0.3283 | 12.4621 | | No log | 0.29 | 300 | 0.1849 | 0.3511 | 0.128 | 0.3421 | 12.4621 | | No log | 0.38 | 400 | 0.1795 | 0.3778 | 0.1458 | 0.3697 | 12.4621 | | No log | 0.48 | 500 | 0.1751 | 0.3797 | 0.1505 | 0.3696 | 12.4609 | | No log | 0.57 | 600 | 0.1723 | 0.3909 | 0.1569 | 0.3816 | 12.4621 | | No log | 0.67 | 700 | 0.1695 | 0.3911 | 0.1599 | 0.3851 | 12.4621 | | No log | 0.76 | 800 | 0.1668 | 0.3922 | 0.1555 | 0.384 | 12.4621 | | No log | 0.86 | 900 | 0.1636 | 0.3956 | 0.1585 | 0.3869 | 12.4621 | | No log | 0.96 | 1000 | 0.1632 | 0.4037 | 0.1705 | 0.3961 | 12.4621 | | No log | 1.05 | 1100 | 0.1610 | 0.4164 | 0.1807 | 0.4096 | 12.4621 | | No log | 1.15 | 1200 | 0.1593 | 0.416 | 0.1789 | 0.409 | 12.4621 | | No log | 1.24 | 1300 | 0.1583 | 0.4173 | 0.1839 | 0.4089 | 12.4621 | | No log | 1.34 | 1400 | 0.1573 | 0.4123 | 0.1752 | 0.4049 | 12.4621 | | No log | 1.43 | 1500 | 0.1561 | 0.4224 | 0.1861 | 0.4148 | 12.4621 | | No log | 1.53 | 1600 | 0.1558 | 0.4179 | 0.1821 | 0.4091 | 12.4621 | | No log | 1.63 | 1700 | 0.1542 | 0.4264 | 0.1861 | 0.4169 | 12.4621 | | No log | 1.72 | 1800 | 0.1539 | 0.4323 | 0.1926 | 0.4229 | 12.4621 | | No log | 1.82 | 1900 | 0.1526 | 0.4301 | 0.1917 | 0.4222 | 12.4621 | | No log | 1.91 | 2000 | 0.1521 | 0.4326 | 0.1965 | 0.423 | 12.4621 | | No log | 2.01 | 2100 | 0.1513 | 0.4309 | 0.1985 | 0.4226 | 12.4621 | | No log | 2.1 | 2200 | 0.1512 | 0.4287 | 0.1907 | 0.4184 | 12.4621 | | No log | 2.2 | 2300 | 0.1509 | 0.439 | 0.2 | 0.4302 | 12.4621 | | No log | 2.29 | 2400 | 0.1512 | 0.4397 | 0.202 | 0.4307 | 12.4621 | | No log | 2.39 | 2500 | 0.1506 | 0.4415 | 0.2068 | 0.4316 | 12.4621 | | No log | 2.49 | 2600 | 0.1504 | 0.4426 | 0.2072 | 0.4338 | 12.4621 | | No log | 2.58 | 2700 | 0.1500 | 0.4418 | 0.1994 | 0.4316 | 12.4621 | | No log | 2.68 | 2800 | 0.1500 | 0.4413 | 0.202 | 0.4308 | 12.4621 | | No log | 2.77 | 2900 | 0.1492 | 0.4392 | 0.2006 | 0.4297 | 12.4621 | | No log | 2.87 | 3000 | 0.1492 | 0.443 | 0.206 | 0.4329 | 12.4621 | | No log | 2.96 | 3100 | 0.1490 | 0.4434 | 0.2049 | 0.4334 | 12.4621 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "themetagsv1", "results": []}]}
hr-wesbeaver/themetagsv1
null
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:03:41+00:00
[]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
themetagsv1 =========== This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1490 * Rouge1: 0.4434 * Rouge2: 0.2049 * Rougel: 0.4334 * Gen Len: 12.4621 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0004 * train\_batch\_size: 10 * eval\_batch\_size: 10 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# nbeerbower/llama-3-dragonmaid-8B AWQ - Model creator: [nbeerbower](https://huggingface.co/nbeerbower) - Original model: [llama-3-dragonmaid-8B](https://huggingface.co/nbeerbower/llama-3-dragonmaid-8B) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/llama-3-dragonmaid-8B-AWQ" system_message = "You are llama-3-dragonmaid-8B, incarnated as a powerful AI. You were created by nbeerbower." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
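Since vLLM (version 0.2.2 or later) is listed above as supporting AWQ checkpoints, here is a hedged serving sketch; the sampling settings and prompt are placeholders, not part of the original card.

```python
# Hypothetical vLLM sketch for this AWQ checkpoint; settings are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="solidrust/llama-3-dragonmaid-8B-AWQ", quantization="awq")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)

outputs = llm.generate(["Write a short poem about dragons."], params)
print(outputs[0].outputs[0].text)
```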
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/llama-3-dragonmaid-8B-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "conversational", "text-generation-inference", "region:us" ]
null
2024-04-23T15:04:49+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us
# nbeerbower/llama-3-dragonmaid-8B AWQ - Model creator: nbeerbower - Original model: llama-3-dragonmaid-8B ## How to use ### Install the necessary packages ### Example Python code ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code
[ "# nbeerbower/llama-3-dragonmaid-8B AWQ\n\n- Model creator: nbeerbower\n- Original model: llama-3-dragonmaid-8B", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us \n", "# nbeerbower/llama-3-dragonmaid-8B AWQ\n\n- Model creator: nbeerbower\n- Original model: llama-3-dragonmaid-8B", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
text-generation
transformers
# ResplendentAI/Aura_Uncensored_l3_8B AWQ - Model creator: [ResplendentAI](https://huggingface.co/ResplendentAI) - Original model: [Aura_Uncensored_l3_8B](https://huggingface.co/ResplendentAI/Aura_Uncensored_l3_8B) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/Aura_Uncensored_l3_8B-AWQ" system_message = "You are Aura_Uncensored_l3_8B, incarnated as a powerful AI. You were created by ResplendentAI." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
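Transformers 4.35.0 and later is listed above as loading AWQ checkpoints directly, so a plain-Transformers alternative sketch follows (autoawq must still be installed); the prompt and generation length are placeholders.

```python
# Hypothetical alternative to the AutoAWQ snippet: load the AWQ checkpoint
# with plain Transformers (>= 4.35) and generate a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "solidrust/Aura_Uncensored_l3_8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```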
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/Aura_Uncensored_l3_8B-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "conversational", "text-generation-inference", "region:us" ]
null
2024-04-23T15:05:12+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us
# ResplendentAI/Aura_Uncensored_l3_8B AWQ - Model creator: ResplendentAI - Original model: Aura_Uncensored_l3_8B ## How to use ### Install the necessary packages ### Example Python code ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code
[ "# ResplendentAI/Aura_Uncensored_l3_8B AWQ\n\n- Model creator: ResplendentAI\n- Original model: Aura_Uncensored_l3_8B", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us \n", "# ResplendentAI/Aura_Uncensored_l3_8B AWQ\n\n- Model creator: ResplendentAI\n- Original model: Aura_Uncensored_l3_8B", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
baraah/blip2-opt-2.7b-200rows
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:05:40+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# StenoType Type migration with large language models for code. Migrates JavaScript to TypeScript by predicting type annotations and generating type definitions. This model is based on [StarCoderBase-7b](https://huggingface.co/bigcode/starcoderbase-7b) and fine-tuned on TypeScript examples derived from [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup). Please see the [GitHub](https://github.com/nuprl/StenoType/) repository for more information.
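A hypothetical loading sketch with Transformers; the prompt format StenoType actually expects is defined in the GitHub repository, so the raw JavaScript input here is illustrative only.

```python
# Hypothetical usage sketch: load the checkpoint and feed it a JavaScript
# snippet; the real prompt format is documented in the StenoType repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("nuprl/stenotype")
model = AutoModelForCausalLM.from_pretrained("nuprl/stenotype", device_map="auto")

js = "function add(a, b) { return a + b; }"
inputs = tok(js, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```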
{"license": "bigscience-openrail-m", "extra_gated_prompt": "## Model License Agreement\nPlease read the BigCode [OpenRAIL-M license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) agreement before accepting it.\n ", "extra_gated_fields": {"I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements": "checkbox"}}
nuprl/stenotype
null
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "license:bigscience-openrail-m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:05:51+00:00
[]
[]
TAGS #transformers #pytorch #gpt_bigcode #text-generation #license-bigscience-openrail-m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# StenoType Type migration with large language models for code. Migrates JavaScript to TypeScript by predicting type annotations and generating type definitions. This model is based on StarCoderBase-7b and fine-tuned on TypeScript examples derived from The Stack. Please see the GitHub repository for more information.
[ "# StenoType\n\nType migration with large language models for code. Migrates JavaScript to\nTypeScript by predicting type annotations and generating type definitions.\n\nThis model is based on StarCoderBase-7b\nand fine-tuned on TypeScript examples derived from The Stack.\n\nPlease see the GitHub repository for\nmore information." ]
[ "TAGS\n#transformers #pytorch #gpt_bigcode #text-generation #license-bigscience-openrail-m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# StenoType\n\nType migration with large language models for code. Migrates JavaScript to\nTypeScript by predicting type annotations and generating type definitions.\n\nThis model is based on StarCoderBase-7b\nand fine-tuned on TypeScript examples derived from The Stack.\n\nPlease see the GitHub repository for\nmore information." ]
text-generation
transformers
## About Quantization

We use the modelscope [swift](https://github.com/modelscope/swift/) repository to perform AWQ quantization. Quantization documentation can be found [here](https://github.com/modelscope/swift/blob/main/docs/source_en/LLM/LLM-quantization.md). The quantization command is as follows:

```bash
# Experimental Environment: A100
CUDA_VISIBLE_DEVICES=0 swift export \
    --model_type llama3-70b-instruct --quant_bits 4 \
    --dataset sharegpt-gpt4-mini --quant_method awq --quant_seqlen 2048 --quant_n_samples 16
```

Inference:
```bash
CUDA_VISIBLE_DEVICES=0 swift infer --model_type llama3-70b-instruct-awq
```

SFT:
```bash
CUDA_VISIBLE_DEVICES=0 swift sft --model_type llama3-70b-instruct-awq --dataset leetcode-python-en
```

## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

|         | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---------|---------------|--------|----------------|-----|-------------|------------------|
| Llama 3 | A new mix of publicly available online data. | 8B  | 8k | Yes | 15T+ | March, 2023    |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date** April 18, 2024.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English.
Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.

**Note**: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.

## How to use

This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase.

### Use with transformers

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

### Use with `llama3`

Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct
```

For Hugging Face support, we recommend using transformers or TGI, but a similar command works.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.

|             | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|-------------|------------------|-----------------------|-------------------------|
| Llama 3 8B  | 1.3M             | 700                   | 390                     |
| Llama 3 70B | 6.4M             | 700                   | 1900                    |
| Total       | 7.7M             |                       | 2290                    |

**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples.
Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.

## Benchmarks

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).

### Base pretrained models

| Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| General | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| General | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| General | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| General | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| General | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| Reading comprehension | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| Reading comprehension | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| Reading comprehension | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |

### Instruction tuned models

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.

Foundation models are widely capable technologies that are built to be used for a diverse range of applications.
They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.

Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.

As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs, and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started; a minimal moderation sketch is also shown at the end of this subsection.

#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

<span style="text-decoration:underline;">Safety</span>

For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

<span style="text-decoration:underline;">Refusals</span>

In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.

We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

#### Responsible release

In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

Misuse

If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
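To make the safeguard workflow above concrete, here is a minimal, hedged moderation sketch. It assumes the `meta-llama/Meta-Llama-Guard-2-8B` checkpoint and the chat template it ships with; the usage pattern mirrors that model's published card, and the decoding settings are illustrative assumptions rather than this card's prescribed setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed safeguard checkpoint; see the Purple Llama resources linked above.
guard_id = "meta-llama/Meta-Llama-Guard-2-8B"

tokenizer = AutoTokenizer.from_pretrained(guard_id)
model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    # Llama Guard 2's chat template wraps the conversation in its safety
    # taxonomy; the model answers 'safe', or 'unsafe' plus a category code.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Screen the user prompt before it reaches the chat model, and screen the
# chat model's reply the same way before returning it to the user.
print(moderate([{"role": "user", "content": "How do I kill a process in Linux?"}]))
```

Gating both directions in this way is the system-level layer the Responsible Use Guide describes; the underlying chat model stays unchanged.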
#### Critical risks

<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)

We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

#### <span style="text-decoration:underline;">Cyber Security</span>

We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

#### <span style="text-decoration:underline;">Child Safety</span>

Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.

#### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide).

## Citation instructions

@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
{"language": ["en"], "license": "other", "tags": ["awq", "int4", "llama3", "facebook", "meta", "pytorch", "llama", "llama-3"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
study-hjt/Meta-Llama-3-70B-Instruct-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "awq", "int4", "llama3", "facebook", "meta", "pytorch", "llama-3", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-23T15:06:21+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #awq #int4 #llama3 #facebook #meta #pytorch #llama-3 #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
About Quantization ------------------ We use the modelscope swift repository to perform AWQ quantization. Quantization documentation can be found here. The quantization command is as follows: Inference: SFT:
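As a minimal sketch of consuming the quantized artifact described above: the 4-bit AWQ checkpoint in this repository loads through transformers like any other causal LM, assuming a recent transformers with the autoawq package installed (the generation settings below are illustrative assumptions, not part of the original card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# This card's repository; the AWQ quantization_config stored in the
# checkpoint is detected automatically at load time.
model_id = "study-hjt/Meta-Llama-3-70B-Instruct-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the weights are already 4-bit, no quantization arguments are passed at load time; the checkpoint's stored quantization_config drives the AWQ kernels.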
Model Details ------------- Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. Model developers Meta Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. Input Models input text only. Output Models generate text and code only. Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. Model Release Date April 18, 2024. Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. License A custom commercial license is available at: URL Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here. Intended Use ------------ Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English. Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. How to use ---------- This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original 'llama3' codebase. ### Use with transformers See the snippet below for usage with Transformers: ### Use with 'llama3' Please follow the instructions in the repository. To download the original checkpoints, see the example command below leveraging 'huggingface-cli': For Hugging Face support, we recommend using transformers or TGI, but a similar command works. Hardware and Software --------------------- Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. Training Data ------------- Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively. Benchmarks ---------- In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here. ### Base pretrained models ### Instruction tuned models ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. Safety For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case.
In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. Refusals In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL #### Critical risks CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### Cyber Security We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability. ### Child Safety Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our GitHub repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community. Ethical Considerations and Limitations -------------------------------------- The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at URL. Citation instructions: @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {URL } Contributors ------------ Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang;
Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
[ "### Use with transformers\n\n\nSee the snippet below for usage with Transformers:", "### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository.\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.", "### Base pretrained models", "### Instruction tuned models", "### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. 
We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.", "#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.", "#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL", "#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).", "### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.", "### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. 
For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.", "### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #awq #int4 #llama3 #facebook #meta #pytorch #llama-3 #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### Use with transformers\n\n\nSee the snippet below for usage with Transformers:", "### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository.\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology, see here.", "### Base pretrained models", "### Instruction tuned models", "### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. 
These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs, and we provide a reference implementation to get you started.", "#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.", "#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL", "#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high-yield Explosives)\n\n\nWe have conducted a twofold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).", "### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry-standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.", "### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine-tuning. 
We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.", "### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our GitHub repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos" ]
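The card above says "See the snippet below for usage with Transformers:" but the snippet itself was stripped in processing. Below is a hedged reconstruction, not Meta's original code: the repo id is a placeholder for a standard Llama 3 instruct checkpoint, and the AWQ int4 variant this card actually describes would additionally require the `autoawq` package.

```python
# A minimal sketch of running a Llama 3 instruct checkpoint with transformers.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder repo id, not this card's quant
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
prompt = "Briefly explain what a model card is."
print(pipe(prompt, max_new_tokens=64)[0]["generated_text"])
```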
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 3.5847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6947 | 1.0 | 1308 | 3.5892 | | 3.5793 | 2.0 | 2616 | 3.5833 | | 3.5287 | 3.0 | 3924 | 3.5847 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
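As a usage aid for the card above, here is a minimal text-generation sketch. It assumes the checkpoint is public on the Hub under the repo id given in this row's metadata; the prompt is illustrative.

```python
# A minimal sketch for generating text with the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="Balibata/my_awesome_eli5_clm-model")
out = generator("Somatic hypermutation allows the immune system to", max_new_tokens=40)
print(out[0]["generated_text"])
```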
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "my_awesome_eli5_clm-model", "results": []}]}
Balibata/my_awesome_eli5_clm-model
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:eli5_category", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:06:41+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
my\_awesome\_eli5\_clm-model ============================ This model is a fine-tuned version of gpt2 on the eli5\_category dataset. It achieves the following results on the evaluation set: * Loss: 3.5847 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral7binstruct_summarize This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.4769 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7025 | 0.2174 | 25 | 1.5594 | | 1.5426 | 0.4348 | 50 | 1.4769 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
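The card above documents a PEFT (LoRA) fine-tune of Mistral-7B-Instruct-v0.2 but gives no loading code. A minimal sketch, assuming the repo hosts only the adapter weights and that the base model's standard `[INST] ... [/INST]` chat format applies:

```python
# A minimal sketch for loading the LoRA adapter on top of its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "JulsdL/mistral7binstruct_summarize")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Illustrative summarization prompt in the Mistral instruct format.
inputs = tokenizer("[INST] Summarize: The meeting covered Q3 results. [/INST]",
                   return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=80)[0],
                       skip_special_tokens=True))
```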
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral7binstruct_summarize", "results": []}]}
JulsdL/mistral7binstruct_summarize
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-23T15:07:24+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
mistral7binstruct\_summarize ============================ This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset. It achieves the following results on the evaluation set: * Loss: 1.4769 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: constant * lr\_scheduler\_warmup\_steps: 0.03 * training\_steps: 50 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
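The card's "How to Get Started" section is an empty template. Below is a hedged sketch for a ByT5-style text2text checkpoint under this row's repo id; the kana input string and the expected input format are assumptions, since the card documents neither.

```python
# A hedged sketch for the kana-to-Latin converter; input format is assumed.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "TwentyNine/byt5-ain-kana-latin-converter-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("イランカラプテ", return_tensors="pt")  # illustrative Ainu greeting in kana
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```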
{"library_name": "transformers", "tags": []}
TwentyNine/byt5-ain-kana-latin-converter-v2
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-23T15:08:07+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nmdr/Llama-3-8B-Instruct-Physics-2k-Mufasa <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF/resolve/main/Llama-3-8B-Instruct-Physics-2k-Mufasa.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a 
handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
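As a usage aid, here is a minimal sketch for fetching one of the listed quants programmatically with `huggingface_hub`; the repo id and filename come from the table above, and Q4_K_M is the quant the table marks as recommended.

```python
# A minimal sketch for downloading a single GGUF quant from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF",
    filename="Llama-3-8B-Instruct-Physics-2k-Mufasa.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded file
```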
{"language": ["en"], "library_name": "transformers", "tags": [], "base_model": "nmdr/Llama-3-8B-Instruct-Physics-2k-Mufasa", "quantized_by": "mradermacher"}
mradermacher/Llama-3-8B-Instruct-Physics-2k-Mufasa-GGUF
null
[ "transformers", "gguf", "en", "base_model:nmdr/Llama-3-8B-Instruct-Physics-2k-Mufasa", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:08:48+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-nmdr/Llama-3-8B-Instruct-Physics-2k-Mufasa #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-nmdr/Llama-3-8B-Instruct-Physics-2k-Mufasa #endpoints_compatible #region-us \n" ]
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Phi-3-mini-4k-instruct-GGUF ## Original Model [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ## Run with LlamaEdge - LlamaEdge version: [v0.8.4](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.8.4) and above - Prompt template - Prompt type: `phi-3-chat` - Prompt string ```text <|system|> {system_message}<|end|> <|user|> {user_message_1}<|end|> <|assistant|> {assistant_message_1}<|end|> <|user|> {user_message_2}<|end|> <|assistant|> ``` - Reverse prompt: `<|end|>` - Context size: `3072` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Phi-3-mini-4k-instruct-Q5_K_M.gguf \ llama-api-server.wasm \ --prompt-template phi-3-chat \ --ctx-size 3072 \ --model-name phi-3-mini ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Phi-3-mini-4k-instruct-Q5_K_M.gguf \ llama-chat.wasm \ --prompt-template phi-3-chat \ --ctx-size 3072 \ ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Phi-3-mini-4k-instruct-Q2_K.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q2_K.gguf) | Q2_K | 2 | 1.42 GB| smallest, significant quality loss - not recommended for most purposes | | [Phi-3-mini-4k-instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 2.09 GB| small, substantial quality loss | | [Phi-3-mini-4k-instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 1.96 GB| very small, high quality loss | | [Phi-3-mini-4k-instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 1.68 GB| very small, high quality loss | | [Phi-3-mini-4k-instruct-Q4_0.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q4_0.gguf) | Q4_0 | 4 | 2.18 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Phi-3-mini-4k-instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 2.39 GB| medium, balanced quality - recommended | | [Phi-3-mini-4k-instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 2.19 GB| small, greater quality loss | | [Phi-3-mini-4k-instruct-Q5_0.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q5_0.gguf) | Q5_0 | 5 | 2.64 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [Phi-3-mini-4k-instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 2.82 GB| large, very low quality loss - recommended | | [Phi-3-mini-4k-instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 2.64 GB| large, low quality loss - 
recommended | | [Phi-3-mini-4k-instruct-Q6_K.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q6_K.gguf) | Q6_K | 6 | 3.14 GB| very large, extremely low quality loss | | [Phi-3-mini-4k-instruct-Q8_0.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-Q8_0.gguf) | Q8_0 | 8 | 4.06 GB| very large, extremely low quality loss - not recommended | | [Phi-3-mini-4k-instruct-f16.gguf](https://huggingface.co/second-state/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct-f16.gguf) | f16 | 16 | 7.64 GB| | *Quantized with llama.cpp b2717.*
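The card documents LlamaEdge only; as a hedged alternative, the same GGUF file should also run under `llama-cpp-python`. A minimal sketch, assuming the Q5_K_M file has already been downloaded to the working directory and reusing the card's phi-3-chat prompt template:

```python
# A hedged sketch using llama-cpp-python as an alternative runtime to LlamaEdge.
from llama_cpp import Llama

llm = Llama(model_path="Phi-3-mini-4k-instruct-Q5_K_M.gguf", n_ctx=3072)
prompt = "<|user|>\nWhat is a GGUF file?<|end|>\n<|assistant|>\n"
out = llm(prompt, max_tokens=128, stop=["<|end|>"])
print(out["choices"][0]["text"])
```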
{"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "model_name": "Phi 3 mini 4k instruct", "base_model": "microsoft/Phi-3-mini-4k-instruct", "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "model_creator": "Microsoft", "model_type": "phi-msft", "quantized_by": "Second State Inc."}
second-state/Phi-3-mini-4k-instruct-GGUF
null
[ "transformers", "gguf", "phi3", "text-generation", "nlp", "code", "custom_code", "en", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:11:30+00:00
[]
[ "en" ]
TAGS #transformers #gguf #phi3 #text-generation #nlp #code #custom_code #en #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #autotrain_compatible #endpoints_compatible #region-us
Phi-3-mini-4k-instruct-GGUF =========================== Original Model -------------- microsoft/Phi-3-mini-4k-instruct Run with LlamaEdge ------------------ * LlamaEdge version: v0.8.4 and above * Prompt template + Prompt type: 'phi-3-chat' + Prompt string + Reverse prompt: '<|end|>' * Context size: '3072' * Run as LlamaEdge service * Run as LlamaEdge command app Quantized GGUF Models --------------------- *Quantized with URL b2717.*
[]
[ "TAGS\n#transformers #gguf #phi3 #text-generation #nlp #code #custom_code #en #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PolizzeDonut-UltimaProvaCluster-Cluster4di4-5epochs-Resol918x1286 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
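The card above gives no inference example. Below is a minimal sketch for a Donut-style vision-encoder-decoder checkpoint; the input image path and the bare `<s>` decoder prompt are assumptions, since the card does not document the task prompt used during fine-tuning.

```python
# A minimal inference sketch for the fine-tuned Donut checkpoint.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "tedad09/PolizzeDonut-UltimaProvaCluster-Cluster4di4-5epochs-Resol918x1286"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("policy_page.png").convert("RGB")  # placeholder document scan
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = processor.tokenizer("<s>", add_special_tokens=False,
                                  return_tensors="pt").input_ids  # assumed start token
outputs = model.generate(pixel_values, decoder_input_ids=task_prompt, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```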
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster4di4-5epochs-Resol918x1286", "results": []}]}
tedad09/PolizzeDonut-UltimaProvaCluster-Cluster4di4-5epochs-Resol918x1286
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:13:41+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
# PolizzeDonut-UltimaProvaCluster-Cluster4di4-5epochs-Resol918x1286 This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# PolizzeDonut-UltimaProvaCluster-Cluster4di4-5epochs-Resol918x1286\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n", "# PolizzeDonut-UltimaProvaCluster-Cluster4di4-5epochs-Resol918x1286\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLPGroupProject-Finetune-bio-mobilebert This model is a fine-tuned version of [nlpie/bio-mobilebert](https://huggingface.co/nlpie/bio-mobilebert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9925 - Accuracy: 0.737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.25 | 250 | 0.8564 | 0.705 | | 12.045 | 0.5 | 500 | 0.7663 | 0.726 | | 12.045 | 0.75 | 750 | 0.7659 | 0.707 | | 0.8388 | 1.0 | 1000 | 0.7144 | 0.737 | | 0.8388 | 1.25 | 1250 | 0.7986 | 0.734 | | 0.658 | 1.5 | 1500 | 0.8002 | 0.728 | | 0.658 | 1.75 | 1750 | 0.7685 | 0.736 | | 0.6945 | 2.0 | 2000 | 0.7751 | 0.738 | | 0.6945 | 2.25 | 2250 | 1.2388 | 0.73 | | 0.5058 | 2.5 | 2500 | 1.1562 | 0.733 | | 0.5058 | 2.75 | 2750 | 0.9343 | 0.736 | | 0.5251 | 3.0 | 3000 | 0.9925 | 0.737 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
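As a usage aid for the multiple-choice checkpoint above, here is a minimal scoring sketch; the question and answer options are illustrative only.

```python
# A minimal sketch for scoring answer options with a multiple-choice head.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "BenjaminTT/NLPGroupProject-Finetune-bio-mobilebert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Which organ is primarily responsible for filtering blood?"
options = ["The kidney", "The femur", "The epidermis"]
enc = tokenizer([question] * len(options), options, padding=True, return_tensors="pt")
batch = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**batch).logits
print(options[logits.argmax(dim=-1).item()])
```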
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "nlpie/bio-mobilebert", "model-index": [{"name": "NLPGroupProject-Finetune-bio-mobilebert", "results": []}]}
BenjaminTT/NLPGroupProject-Finetune-bio-mobilebert
null
[ "transformers", "safetensors", "mobilebert", "multiple-choice", "generated_from_trainer", "base_model:nlpie/bio-mobilebert", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:14:00+00:00
[]
[]
TAGS #transformers #safetensors #mobilebert #multiple-choice #generated_from_trainer #base_model-nlpie/bio-mobilebert #license-mit #endpoints_compatible #region-us
NLPGroupProject-Finetune-bio-mobilebert ======================================= This model is a fine-tuned version of nlpie/bio-mobilebert on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.9925 * Accuracy: 0.737 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.2+cu118 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mobilebert #multiple-choice #generated_from_trainer #base_model-nlpie/bio-mobilebert #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/BAAI/Aquila2-70B-Expr <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q2_K.gguf) | Q2_K | 26.2 | | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.IQ3_XS.gguf) | IQ3_XS | 29.1 | | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.IQ3_S.gguf) | IQ3_S | 30.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q3_K_S.gguf) | Q3_K_S | 30.7 | | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.IQ3_M.gguf) | IQ3_M | 31.7 | | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q3_K_M.gguf) | Q3_K_M | 34.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q3_K_L.gguf) | Q3_K_L | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.IQ4_XS.gguf) | IQ4_XS | 38.1 | | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q4_K_S.gguf) | Q4_K_S | 40.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q4_K_M.gguf) | Q4_K_M | 42.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q5_K_S.gguf) | Q5_K_S | 48.4 | | | [GGUF](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q5_K_M.gguf) | Q5_K_M | 49.7 | | | [PART 1](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q6_K.gguf.part2of2) | Q6_K | 57.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Aquila2-70B-Expr-GGUF/resolve/main/Aquila2-70B-Expr.Q8_0.gguf.part2of2) | Q8_0 | 74.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "other", "library_name": "transformers", "base_model": "BAAI/Aquila2-70B-Expr", "quantized_by": "mradermacher"}
mradermacher/Aquila2-70B-Expr-GGUF
null
[ "transformers", "gguf", "en", "base_model:BAAI/Aquila2-70B-Expr", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:14:16+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-BAAI/Aquila2-70B-Expr #license-other #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-BAAI/Aquila2-70B-Expr #license-other #endpoints_compatible #region-us \n" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_polish This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1797 - Precision: 0.8868 - Recall: 0.8974 - F1: 0.8921 - Accuracy: 0.9525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3688 | 1.0 | 625 | 0.1956 | 0.8580 | 0.8764 | 0.8671 | 0.9431 | | 0.1652 | 2.0 | 1250 | 0.1748 | 0.8845 | 0.8891 | 0.8868 | 0.9506 | | 0.1274 | 3.0 | 1875 | 0.1797 | 0.8868 | 0.8974 | 0.8921 | 0.9525 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "trained_polish", "results": []}]}
annamariagnat/trained_polish
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:14:24+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
trained\_polish =============== This model is a fine-tuned version of distilbert/distilbert-base-multilingual-cased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1797 * Precision: 0.8868 * Recall: 0.8974 * F1: 0.8921 * Accuracy: 0.9525 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.1.2+cu118 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough slices: - sources: - layer_range: [0, 19] model: model: path: meta-llama/Meta-Llama-3-8B - sources: - layer_range: [3, 32] model: model: path: meta-llama/Meta-Llama-3-8B ```
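As a hedged illustration (not part of the original card), the merged checkpoint can be loaded like any other transformers causal LM. The repo id is taken from this record's id field, and enough memory for an ~11.5B-parameter bf16 model is assumed:

```python
# Hedged sketch: load the merged model produced by the YAML config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChuGyouk/Llama-3-11.5B-modified-DUS-nocpt"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```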
{"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["meta-llama/Meta-Llama-3-8B"]}
ChuGyouk/Llama-3-11.5B-modified-DUS-nocpt
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:14:24+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merged This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * meta-llama/Meta-Llama-3-8B ### Configuration The following YAML configuration was used to produce this model:
[ "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
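The "How to Get Started" section above is an unfilled stub. A minimal sketch, assuming the repository loads as a standard transformers feature-extraction checkpoint (the repo id is taken from this record's id field):

```python
# Hedged sketch: plain feature extraction with transformers.
from transformers import AutoModel, AutoTokenizer

model_id = "stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep30"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```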
{"library_name": "transformers", "tags": []}
stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep30
null
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:14:42+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Merged [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) and [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) because I thought the Dolphin finetune was a bit too 'robot-y' in the answers. GGUF files can be found here: [RDson/Dolphin-less-Llama-3-Instruct-8B-GGUF](https://huggingface.co/RDson/Dolphin-less-Llama-3-Instruct-8B-GGUF). Mergekit yaml: ``` tokenizer_source: union slices: - sources: - model: ollama/llama3/Meta-Llama-3-8B-Instruct layer_range: [0, 32] - model: dolphin-2.9-llama3-8b layer_range: [0, 32] parameters: weight: 0.75 merge_method: slerp base_model: ollama/llama3/Meta-Llama-3-8B-Instruct parameters: normalize: true embed_slerp: true t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
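For readers unfamiliar with the slerp merge method named in the YAML above, here is an illustrative NumPy sketch of spherical linear interpolation between two flattened weight tensors; mergekit's internal implementation may differ in detail:

```python
# Illustrative slerp between two weight vectors a and b at interpolation factor t.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between the vectors
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# t=0 returns a and t=1 returns b; in the YAML above, t varies per layer
# for the self_attn and mlp tensors.
```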
{"license": "other", "license_name": "llama-3", "license_link": "https://llama.meta.com/llama3/license"}
RDson/Dolphin-less-Llama-3-Instruct-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:15:41+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Merged cognitivecomputations/dolphin-2.9-llama3-8b and meta-llama/Meta-Llama-3-8B-Instruct because I thought the Dolphin finetune was a bit too 'robot-y' in the answers. GGUF files can be found here: RDson/Dolphin-less-Llama-3-Instruct-8B-GGUF. Mergekit yaml:
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-to-image
diffusers
# olivia-casta-xl <Gallery /> ## Model description This is a LoRA of Olivia Casta, a Fansly model. It will produce both NSFW and SFW images. By CerberusAI ## Trigger words You should use `olivia` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/MarkBW/olivia-casta-xl/tree/main) them in the Files & versions tab.
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "-", "output": {"url": "images/2024-04-21_16-14-29_5619.jpeg"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "olivia"}
MarkBW/olivia-casta-xl
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
null
2024-04-23T15:16:34+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
# olivia-casta-xl <Gallery /> ## Model description This is a LoRA of Olivia Casta, a Fansly model. It will produce both NSFW and SFW images. By CerberusAI ## Trigger words You should use 'olivia' to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# olivia-casta-xl\n\n<Gallery />", "## Model description \n\nThis is a LoRa of Olivia Casta, a Fansly model. Will produce NSFW and Sfw images. By CerberusAI", "## Trigger words\n\nYou should use 'olivia' to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n", "# olivia-casta-xl\n\n<Gallery />", "## Model description \n\nThis is a LoRa of Olivia Casta, a Fansly model. Will produce NSFW and Sfw images. By CerberusAI", "## Trigger words\n\nYou should use 'olivia' to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-G2 This model is a fine-tuned version of [ChakuChidiya/distilbert-base-uncased-G1](https://huggingface.co/ChakuChidiya/distilbert-base-uncased-G1) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1971, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.07} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.37.0 - TensorFlow 2.15.0 - Datasets 2.14.5 - Tokenizers 0.15.1
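The optimizer dictionary in the card above matches the output of transformers' TF `create_optimizer` helper; a hedged reconstruction follows (zero warmup steps is an assumption, since the schedule starts directly at 2e-05):

```python
# Hedged reconstruction of the AdamWeightDecay + PolynomialDecay config above.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,           # initial_learning_rate
    num_train_steps=1971,   # decay_steps
    num_warmup_steps=0,     # assumption: no warmup is listed
    weight_decay_rate=0.07,
)
```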
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "ChakuChidiya/distilbert-base-uncased-G1", "model-index": [{"name": "distilbert-base-uncased-G2", "results": []}]}
ChakuChidiya/distilbert-base-uncased-G2
null
[ "transformers", "tf", "distilbert", "token-classification", "generated_from_keras_callback", "base_model:ChakuChidiya/distilbert-base-uncased-G1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:16:45+00:00
[]
[]
TAGS #transformers #tf #distilbert #token-classification #generated_from_keras_callback #base_model-ChakuChidiya/distilbert-base-uncased-G1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# distilbert-base-uncased-G2 This model is a fine-tuned version of ChakuChidiya/distilbert-base-uncased-G1 on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1971, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.07} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.37.0 - TensorFlow 2.15.0 - Datasets 2.14.5 - Tokenizers 0.15.1
[ "# distilbert-base-uncased-G2\n\nThis model is a fine-tuned version of ChakuChidiya/distilbert-base-uncased-G1 on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1971, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.07}\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.37.0\n- TensorFlow 2.15.0\n- Datasets 2.14.5\n- Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #tf #distilbert #token-classification #generated_from_keras_callback #base_model-ChakuChidiya/distilbert-base-uncased-G1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# distilbert-base-uncased-G2\n\nThis model is a fine-tuned version of ChakuChidiya/distilbert-base-uncased-G1 on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1971, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.07}\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.37.0\n- TensorFlow 2.15.0\n- Datasets 2.14.5\n- Tokenizers 0.15.1" ]
text-generation
transformers
## How to Use ``` import transformers import torch model_id = "adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.3" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "Você é um robô pirata que sempre responde como um pirata deveria!"}, {"role": "user", "content": "Quem é você?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ```
{"language": ["pt"], "datasets": ["adalbertojunior/openHermes_portuguese"]}
adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.3
null
[ "transformers", "safetensors", "llama", "text-generation", "pt", "dataset:adalbertojunior/openHermes_portuguese", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:17:32+00:00
[]
[ "pt" ]
TAGS #transformers #safetensors #llama #text-generation #pt #dataset-adalbertojunior/openHermes_portuguese #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## How to Use
[ "## Como Utilizar" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pt #dataset-adalbertojunior/openHermes_portuguese #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Como Utilizar" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
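The card's "How to Get Started" section is empty. A minimal sketch for attaching this PEFT adapter to its base model (the base model comes from the metadata below; any further task-specific setup is not documented):

```python
# Hedged sketch: load the base model, then apply this adapter with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot")
model = PeftModel.from_pretrained(base, "mingyue0101/codeparrot-model-instruct")
tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot")
```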
{"library_name": "peft", "base_model": "codeparrot/codeparrot"}
mingyue0101/codeparrot-model-instruct
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:codeparrot/codeparrot", "region:us" ]
null
2024-04-23T15:17:47+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-codeparrot/codeparrot #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-codeparrot/codeparrot #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral7binstruct_summarize This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.4542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5238 | 0.2174 | 25 | 1.4682 | | 1.5249 | 0.4348 | 50 | 1.4542 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
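The card does not show inference code. A minimal sketch, assuming the repository holds a standard PEFT adapter on top of the base model named above:

```python
# Hedged sketch: base model + LoRA adapter for this summarization fine-tune.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ipbrennan/mistral7binstruct_summarize")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```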
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral7binstruct_summarize", "results": []}]}
ipbrennan/mistral7binstruct_summarize
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-23T15:18:51+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
mistral7binstruct\_summarize ============================ This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset. It achieves the following results on the evaluation set: * Loss: 1.4542 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: constant * lr\_scheduler\_warmup\_steps: 0.03 * training\_steps: 50 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
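The "How to Get Started" section above is an unfilled stub. A minimal sketch, assuming the repository behaves like a standard Llama-3 chat checkpoint (the repo id is taken from this record's id field; the prompt and generation settings are arbitrary):

```python
# Hedged sketch: chat-style generation with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cemt/OrpoLlama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello! Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```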
{"library_name": "transformers", "tags": []}
cemt/OrpoLlama-3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:19:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
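As a starting point for the empty "How to Get Started" section above, here is a minimal, hypothetical sketch: it assumes the adapter in this repo applies cleanly to the TinyLlama/TinyLlama-1.1B-Chat-v1.0 base listed in this card's metadata, and it reuses the 4-bit `bitsandbytes` settings documented under "Training procedure".

```python
# Minimal sketch (not an official example): load this PEFT adapter on its
# TinyLlama base with the 4-bit bitsandbytes config documented above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.2_Seed103"
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```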
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.2_Seed103
null
[ "peft", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2024-04-23T15:20:09+00:00
[ "1910.09700" ]
[]
TAGS #peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ## Training procedure The following 'bitsandbytes' quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16", "### Framework versions\n\n\n- PEFT 0.7.0.dev0", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16", "### Framework versions\n\n\n- PEFT 0.7.0.dev0" ]
[ "TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16", "### Framework versions\n\n\n- PEFT 0.7.0.dev0", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16", "### Framework versions\n\n\n- PEFT 0.7.0.dev0" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.2_Seed103
null
[ "peft", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2024-04-23T15:20:16+00:00
[ "1910.09700" ]
[]
TAGS #peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ## Training procedure The following 'bitsandbytes' quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16", "### Framework versions\n\n\n- PEFT 0.7.0.dev0" ]
[ "TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16", "### Framework versions\n\n\n- PEFT 0.7.0.dev0" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-new This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
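A minimal usage sketch, assuming this repo hosts a PEFT adapter for `microsoft/phi-2` as the tags and base-model metadata indicate; the prompt is illustrative only.

```python
# Minimal sketch: load the base model, then apply the phi-2-new adapter with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "pefanis27/phi-2-new")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```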
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2-new", "results": []}]}
pefanis27/phi-2-new
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-23T15:20:17+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
# phi-2-new This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# phi-2-new\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "# phi-2-new\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
text-generation
transformers
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a28db2f1968b7d7f357182/xXTKoRbBORy7QFWAeaBxh.png) *This model was quantized by [SanctumAI](https://sanctum.ai). To leave feedback, join our community in [Discord](https://discord.gg/7ZNE78HJKh).* # Phi 3 Mini 4K Instruct GGUF **Model creator:** [microsoft](https://huggingface.co/microsoft)<br> **Original model**: [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)<br> ## Model Summary: The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, with the Mini version in two variants, 4K and 128K, which is the context length (in tokens) that it can support. The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters. ## Prompt Template: If you're using the Sanctum app, simply use the `Phi 3` model preset. Prompt template: ``` <|system|> {system_prompt}.<|end|> <|user|> {prompt}<|end|> <|assistant|> ``` ## Hardware Requirements Estimate | Name | Quant method | Size | Memory (RAM, vRAM) required | | ---- | ---- | ---- | ---- | | [phi-3-mini-4k-instruct.Q2_K.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q2_K.gguf) | Q2_K | 1.45 GB | 5.05 GB | | [phi-3-mini-4k-instruct.Q3_K_S.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q3_K_S.gguf) | Q3_K_S | 1.68 GB | 5.27 GB | | [phi-3-mini-4k-instruct.Q3_K_M.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q3_K_M.gguf) | Q3_K_M | 1.88 GB | 5.45 GB | | [phi-3-mini-4k-instruct.Q3_K_L.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q3_K_L.gguf) | Q3_K_L | 2.05 GB | 5.61 GB | | [phi-3-mini-4k-instruct.Q4_0.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q4_0.gguf) | Q4_0 | 2.18 GB | 5.73 GB | | [phi-3-mini-4k-instruct.Q4_K_S.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q4_K_S.gguf) | Q4_K_S | 2.19 GB | 5.74 GB | | [phi-3-mini-4k-instruct.Q4_K_M.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q4_K_M.gguf) | Q4_K_M | 2.32 GB | 5.86 GB | | [phi-3-mini-4k-instruct.Q4_K.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q4_K.gguf) | Q4_K | 2.32 GB | 5.86 GB | | [phi-3-mini-4k-instruct.Q4_1.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q4_1.gguf) | Q4_1 | 2.41 GB | 5.94 GB | | [phi-3-mini-4k-instruct.Q5_0.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q5_0.gguf) | Q5_0 | 2.64 GB | 6.16 GB | | [phi-3-mini-4k-instruct.Q5_K_S.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q5_K_S.gguf) | Q5_K_S | 2.64 GB | 6.16
GB | | [phi-3-mini-4k-instruct.Q5_K_M.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q5_K_M.gguf) | Q5_K_M | 2.72 GB | 6.23 GB | | [phi-3-mini-4k-instruct.Q5_K.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q5_K.gguf) | Q5_K | 2.72 GB | 6.23 GB | | [phi-3-mini-4k-instruct.Q5_1.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q5_1.gguf) | Q5_1 | 2.87 GB | 6.38 GB | | [phi-3-mini-4k-instruct.Q6_K.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q6_K.gguf) | Q6_K | 3.14 GB | 6.62 GB | | [phi-3-mini-4k-instruct.Q8_0.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.Q8_0.gguf) | Q8_0 | 4.06 GB | 7.48 GB | | [phi-3-mini-4k-instruct.fp16.gguf](https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/blob/main/phi-3-mini-4k-instruct.fp16.gguf) | f16 | 7.64 GB | 10.82 GB | ## Disclaimer Sanctum is not the creator, originator, or owner of any Model featured in the Models section of the Sanctum application. Each Model is created and provided by third parties. Sanctum does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Model listed there. You understand that supported Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Model is the sole responsibility of the person or entity who originated such Model. Sanctum may not monitor or control the Models supported and cannot, and does not, take responsibility for any such Model. Sanctum disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Models. Sanctum further disclaims any warranty that the Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Models, your downloading of any Model, or use of any other Model provided by or through Sanctum.
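To try one of the quants above outside the Sanctum app, here is a minimal sketch using `llama-cpp-python` (an assumed runtime choice; any GGUF-compatible runtime works), wiring the prompt template from this card to a downloaded file from the table:

```python
# Minimal sketch: run a downloaded quant with llama-cpp-python, using the
# Phi 3 prompt template documented in this card.
from llama_cpp import Llama

llm = Llama(model_path="phi-3-mini-4k-instruct.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|system|>\nYou are a helpful AI assistant.<|end|>\n"
    "<|user|>\nHow to explain Internet for a medieval knight?<|end|>\n"
    "<|assistant|>\n"
)
out = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(out["choices"][0]["text"])
```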
{"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "pipeline_tag": "text-generation", "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE"}
SanctumAI/Phi-3-mini-4k-instruct-GGUF
null
[ "transformers", "gguf", "phi3", "nlp", "code", "text-generation", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:22:44+00:00
[]
[ "en" ]
TAGS #transformers #gguf #phi3 #nlp #code #text-generation #en #license-mit #endpoints_compatible #region-us
!image/png *This model was quantized by SanctumAI. To leave feedback, join our community in Discord.* Phi 3 Mini 4K Instruct GGUF =========================== Model creator: microsoft Original model: Phi-3-mini-4k-instruct Model Summary: -------------- The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, with the Mini version in two variants, 4K and 128K, which is the context length (in tokens) that it can support. The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters. Prompt Template: ---------------- If you're using the Sanctum app, simply use the 'Phi 3' model preset. Prompt template: Hardware Requirements Estimate ------------------------------ Disclaimer ---------- Sanctum is not the creator, originator, or owner of any Model featured in the Models section of the Sanctum application. Each Model is created and provided by third parties. Sanctum does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Model listed there. You understand that supported Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Model is the sole responsibility of the person or entity who originated such Model. Sanctum may not monitor or control the Models supported and cannot, and does not, take responsibility for any such Model. Sanctum disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Models. Sanctum further disclaims any warranty that the Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Models, your downloading of any Model, or use of any other Model provided by or through Sanctum.
[]
[ "TAGS\n#transformers #gguf #phi3 #nlp #code #text-generation #en #license-mit #endpoints_compatible #region-us \n" ]
text-generation
transformers
# merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough slices: - sources: - layer_range: [0, 8] model: model: path: meta-llama/Meta-Llama-3-8B - sources: - layer_range: [8, 16] model: model: path: meta-llama/Meta-Llama-3-8B - sources: - layer_range: [8, 16] model: model: path: meta-llama/Meta-Llama-3-8B - sources: - layer_range: [16, 24] model: model: path: meta-llama/Meta-Llama-3-8B - sources: - layer_range: [16, 24] model: model: path: meta-llama/Meta-Llama-3-8B - sources: - layer_range: [24, 32] model: model: path: meta-llama/Meta-Llama-3-8B ```
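Assuming `mergekit` is installed, a configuration like the one above is typically re-run with its `mergekit-yaml` entry point (for example, `mergekit-yaml config.yaml ./merged`). The published checkpoint itself loads like any other causal LM; a minimal sketch follows, with the repo id taken from this card and the dtype matching the merge config:

```python
# Minimal sketch: loading the published merge like any other causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ChuGyouk/Llama-3-11.5B-iDUS-nocpt")
model = AutoModelForCausalLM.from_pretrained(
    "ChuGyouk/Llama-3-11.5B-iDUS-nocpt",
    torch_dtype=torch.bfloat16,  # matches the merge config's dtype
    device_map="auto",
)
```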
{"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["meta-llama/Meta-Llama-3-8B"]}
ChuGyouk/Llama-3-11.5B-iDUS-nocpt
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:23:15+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merged This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * meta-llama/Meta-Llama-3-8B ### Configuration The following YAML configuration was used to produce this model:
[ "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/BAAI/AquilaChat2-70B-Expr <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q2_K.gguf) | Q2_K | 26.2 | | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.IQ3_XS.gguf) | IQ3_XS | 29.1 | | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.IQ3_S.gguf) | IQ3_S | 30.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q3_K_S.gguf) | Q3_K_S | 30.7 | | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.IQ3_M.gguf) | IQ3_M | 31.7 | | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q3_K_M.gguf) | Q3_K_M | 34.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q3_K_L.gguf) | Q3_K_L | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.IQ4_XS.gguf) | IQ4_XS | 38.1 | | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q4_K_S.gguf) | Q4_K_S | 40.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q4_K_M.gguf) | Q4_K_M | 42.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q5_K_S.gguf) | Q5_K_S | 48.4 | | | [GGUF](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q5_K_M.gguf) | Q5_K_M | 49.7 | | | [PART 1](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q6_K.gguf.part2of2) | Q6_K | 57.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AquilaChat2-70B-Expr-GGUF/resolve/main/AquilaChat2-70B-Expr.Q8_0.gguf.part2of2) | Q8_0 | 74.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might 
have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
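To make the multi-part note above concrete: the split quants are reassembled by plain byte concatenation. A minimal sketch, with file names taken from the Q6_K row of the table (equivalent to `cat part1 part2 > out` on Linux):

```python
# Minimal sketch: reassemble a split quant from the table above by simple
# byte concatenation of its parts, in order.
import shutil

parts = [
    "AquilaChat2-70B-Expr.Q6_K.gguf.part1of2",
    "AquilaChat2-70B-Expr.Q6_K.gguf.part2of2",
]
with open("AquilaChat2-70B-Expr.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```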
{"language": ["en"], "license": "other", "library_name": "transformers", "base_model": "BAAI/AquilaChat2-70B-Expr", "quantized_by": "mradermacher"}
mradermacher/AquilaChat2-70B-Expr-GGUF
null
[ "transformers", "gguf", "en", "base_model:BAAI/AquilaChat2-70B-Expr", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:23:15+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-BAAI/AquilaChat2-70B-Expr #license-other #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-BAAI/AquilaChat2-70B-Expr #license-other #endpoints_compatible #region-us \n" ]
text-to-image
diffusers
### svankmajer Dreambooth model trained by howiejayz with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
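A minimal sketch for testing the concept with `diffusers` outside the Colab notebooks (the prompt token is an assumption based on the concept name):

```python
# Minimal sketch: load the Dreambooth concept with diffusers and sample an image.
# The token "svankmajer" in the prompt is assumed from the concept name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "howiejayz/svankmajer", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait of a cat, svankmajer style").images[0]
image.save("svankmajer-sample.png")
```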
{"license": "creativeml-openrail-m", "tags": ["text-to-image", "stable-diffusion"]}
howiejayz/svankmajer
null
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-23T15:23:56+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
### svankmajer Dreambooth model trained by howiejayz with TheLastBen's fast-DreamBooth notebook Test the concept via A1111 Colab fast-Colab-A1111 Sample pictures of this concept:
[ "### svankmajer Dreambooth model trained by howiejayz with TheLastBen's fast-DreamBooth notebook\n\n\nTest the concept via A1111 Colab fast-Colab-A1111\n\nSample pictures of this concept:" ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "### svankmajer Dreambooth model trained by howiejayz with TheLastBen's fast-DreamBooth notebook\n\n\nTest the concept via A1111 Colab fast-Colab-A1111\n\nSample pictures of this concept:" ]
text-generation
transformers
## Model Summary The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, with the Mini version in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) that it can support. The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf) + Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx) ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat). ### Chat Format Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>`. For a few-shot prompt, it can be formatted as follows: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippet shows how to quickly get started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-mini-4k-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct") messages = [ {"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes.
Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 4K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness. ### Fine-tuning A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py). ## Benchmarks We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5. All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 | |---|---|---|---|---|---|---|---|---|---| | MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 | | HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 | | ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 | | GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 | | MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 | | AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 | | TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 | | Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 | | Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 | | PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 | | SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 | | BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 | | WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 | | OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 | | BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 | | CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 | | TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 | | HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 | | MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" * CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf) + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx) ## Cross Platform Support ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx). Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. 
## Cross Platform Support

The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).

Optimized Phi-3 models are also published in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux, and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across CPU, GPU, and mobile devices.

Here are some of the optimized configurations we have added:

1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN

## License

The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
{"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation"}
jncraton/Phi-3-mini-4k-instruct-ct2-int8
null
[ "transformers", "nlp", "code", "text-generation", "conversational", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:24:00+00:00
[]
[ "en" ]
TAGS #transformers #nlp #code #text-generation #conversational #en #license-mit #endpoints_compatible #region-us
Model Summary ------------- The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Mini version comes in two variants, 4K and 128K, which is the context length (in tokens) that it can support. The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters. Resources and Technical Documentation: * Phi-3 Microsoft Blog * Phi-3 Technical Report * Phi-3 on Azure AI Studio * Phi-3 GGUF: 4K * Phi-3 ONNX: 4K Intended Uses ------------- Primary use cases The model is intended for commercial and research use in English. The model is suited to applications that require: 1. Memory/compute-constrained environments 2. Latency-bound scenarios 3. Strong reasoning (especially code, math, and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative-AI-powered features. Use case considerations Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. How to Use ---------- Phi-3 Mini-4K-Instruct has been integrated into the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following: * When loading the model, ensure that 'trust\_remote\_code=True' is passed as an argument of the 'from\_pretrained()' function. * Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source. The current 'transformers' version can be verified with: 'pip list | grep transformers'. Phi-3 Mini-4K-Instruct is also available in HuggingChat. ### Chat Format Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follows: For example: where the model generates the text after '<|assistant|>'. For few-shot prompts, the prompt can be formatted as the following: ### Sample inference code This code snippet shows how to get started quickly with running the model on a GPU: Responsible AI Considerations ----------------------------- Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive.
Some of the limiting behaviors to be aware of include: * Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. * Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or the prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. * Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make them inappropriate to deploy in sensitive contexts without additional mitigations specific to the use case. * Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. * Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: * Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. * High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable, or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. * Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case-specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). * Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. * Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. Training -------- ### Model * Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines. * Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2. Newly created synthetic, “textbook-like” data designed to teach math, coding, common-sense reasoning, and general knowledge of the world (science, daily activities, theory of mind, etc.); 3. High-quality chat-format supervised data covering various topics to reflect human preferences on aspects such as instruction-following, truthfulness, honesty, and helpfulness. ### Fine-tuning A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate libraries is provided here. Benchmarks ---------- We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common-sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5. All the reported numbers are produced with the exact same pipeline, to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots come from a Microsoft-internal tool for evaluating language models, and in particular we did not optimize the pipeline for Phi-3. More specifically, we did not change prompts, pick different few-shot examples, change the prompt format, or do any other form of optimization for the model. The number of k-shot examples is listed per benchmark. Software -------- * PyTorch * DeepSpeed * Transformers * Flash-Attention Hardware -------- Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier-generation GPUs: call AutoModelForCausalLM.from\_pretrained() with attn\_implementation="eager" * CPU: use the GGUF quantized models 4K * Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K Cross Platform Support ---------------------- The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here. Optimized Phi-3 models are also published in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux, and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across CPU, GPU, and mobile devices. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN License ------- The model is licensed under the MIT license. Trademarks ---------- This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
[ "### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:", "### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. 
At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------", "### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.", "### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.", "### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. 
We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies." ]
[ "TAGS\n#transformers #nlp #code #text-generation #conversational #en #license-mit #endpoints_compatible #region-us \n", "### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:", "### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. 
At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------", "### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.", "### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.", "### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. 
We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies." ]
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
samuelchiji/mr_sam_wav2vec2_nigerian_accent_v3
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:28:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# 🦙 Llama-3-LlamaPlanner

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64676c81e7a6a374fd181110/qCz8e2WYIg3Lh9KRucAzb.jpeg)

## Model Description

LlamaPlanner is a fine-tuned version of Meta's Llama-3-8B model, specifically designed for generating high-quality plans for code generation tasks. The model was trained on CodeNet-16K, a curated dataset of competitive programming problems and their corresponding plans generated using Llama-3-70B. By leveraging Parameter Efficient Fine-Tuning (PEFT), LlamaPlanner achieves performance comparable to much larger models in generating effective plans for code generation.

## Model Details

- **Base Model:** Llama-3-8B Instruct
- **Fine-Tuning Approach:** Parameter Efficient Fine-Tuning (PEFT) using Unsloth
- **Training Data:** CodeNet-16K, a filtered and deduplicated dataset of 16,500 competitive programming problems and their plans generated using Llama-3-70B
- **Training Infrastructure:** H100-SXM5 GPU
- **Evaluation Benchmarks:** HumanEval and EvalPlus

## How to Use

To use LlamaPlanner with the Hugging Face Transformers library, follow these steps:

```python
import torch
import transformers

model_id = "verifiers-for-code/Llama-3-LlamaPlanner"

# Build a text-generation pipeline in bfloat16, sharded across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

prompt = "Generate a plan for a program that sorts an array of integers in ascending order."
print(pipeline(prompt, max_new_tokens=256)[0]["generated_text"])
```

## Training Details

LlamaPlanner was trained using the following steps (a hedged sketch of the PEFT setup follows this list):

1. Filtering and preprocessing the CodeNet dataset to create CodeNet-16K
2. Generating plans for each problem using Llama-3-70B
3. Formatting the problem description, input description, output description, and samples as input, and the generated plans as output
4. Performing PEFT on the Llama-3-8B Instruct base model using Unsloth with different ranks and alpha values
5. Training on an H100-SXM5 GPU for varying epochs
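A minimal sketch of the Unsloth-based LoRA setup described above; the rank, alpha, sequence length, and 4-bit loading choices are illustrative assumptions, since the card does not publish the exact configuration:

```python
# Hedged sketch of PEFT with Unsloth; hyperparameters are placeholders, not the
# values actually swept during LlamaPlanner training.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    max_seq_length=4096,   # assumption: long enough for problem text plus plan
    load_in_4bit=True,     # assumption: QLoRA-style memory savings
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # LoRA rank; the card says several ranks were tried
    lora_alpha=32,         # LoRA scaling factor, likewise swept
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
# `model` can then be trained (e.g. with trl.SFTTrainer) on the CodeNet plan pairs.
```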
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["code"], "datasets": ["verifiers-for-code/CodeNet-16K", "verifiers-for-code/CodeNet-Planner"]}
verifiers-for-code/Llama-3-LlamaPlanner
null
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversational", "en", "dataset:verifiers-for-code/CodeNet-16K", "dataset:verifiers-for-code/CodeNet-Planner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:31:01+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #code #conversational #en #dataset-verifiers-for-code/CodeNet-16K #dataset-verifiers-for-code/CodeNet-Planner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Llama-3-LlamaPlanner !image/jpeg ## Model Description LlamaPlanner is a fine-tuned version of Meta's Llama-8B model which has been specifically designed for generating high-quality plans for code generation tasks. The model was trained on CodeNet-16k, a curated dataset of competitive programming problems, and their corresponding plans generated using Llama-3-70B. By leveraging the power of Parameter Efficient Fine-Tuning (PEFT), LlamaPlanner achieves performance comparable to much larger models in generating effective plans for code generation. ## Model Details - Base Model: Llama-8B Instruct - Fine-Tuning Approach: Parameter Efficient Fine-Tuning (PEFT) using Unsloth - Training Data: CodeNet-16k, a filtered and deduplicated dataset of 16,500 competitive programming problems and their plans generated using Llama-3-70B - Training Infrastructure: H100-SXM5 GPU - Evaluation Benchmarks: HumanEval and EvalPlus ## How to Use To use LlamaPlanner with the Hugging Face Transformers library, follow these steps: ## Training Details LlamaPlanner was trained using the following steps: 1. Filtering and preprocessing the CodeNet dataset to create CodeNet-16k 2. Generating plans for each problem using Llama-3-70B 3. Formatting the problem description, input description, output description, and samples as input, and the generated plans as output 4. Performing PEFT on the Llama-8B Instruct base model using Unsloth with different ranks and alpha values 5. Training on an H100-SXM5 GPU for varying epochs ## Evaluation Results LlamaPlanner was evaluated on the HumanEval and EvalPlus benchmarks using various methods, including zero-shot, self-planning, base planner model, and fine-tuned planner model. The results demonstrated that LlamaPlanner outperforms the base Llama-3-8B model by 14% on HumanEval and 11% on EvalPlus. Additionally, plans generated by LlamaPlanner helped boost the performance of Llama-3-70B on HumanEval. If you use LlamaPlanner in your research or applications, please cite the model using the following BibTeX entry: ## License LlamaPlanner is released under the Apache License 2.0. ## Acknowledgements We would like to thank Meta for releasing the Llama model family and the open-source community for their contributions to the development of large language models and their applications in code generation tasks.
[ "# Llama-3-LlamaPlanner\n\n!image/jpeg", "## Model Description\n\nLlamaPlanner is a fine-tuned version of Meta's Llama-8B model which has been specifically designed for generating high-quality plans for code generation tasks. The model was trained on CodeNet-16k, a curated dataset of competitive programming problems, and their corresponding plans generated using Llama-3-70B. By leveraging the power of Parameter Efficient Fine-Tuning (PEFT), LlamaPlanner achieves performance comparable to much larger models in generating effective plans for code generation.", "## Model Details\n\n- Base Model: Llama-8B Instruct\n- Fine-Tuning Approach: Parameter Efficient Fine-Tuning (PEFT) using Unsloth\n- Training Data: CodeNet-16k, a filtered and deduplicated dataset of 16,500 competitive programming problems and their plans generated using Llama-3-70B\n- Training Infrastructure: H100-SXM5 GPU\n- Evaluation Benchmarks: HumanEval and EvalPlus", "## How to Use\n\nTo use LlamaPlanner with the Hugging Face Transformers library, follow these steps:", "## Training Details\n\nLlamaPlanner was trained using the following steps:\n\n1. Filtering and preprocessing the CodeNet dataset to create CodeNet-16k\n2. Generating plans for each problem using Llama-3-70B\n3. Formatting the problem description, input description, output description, and samples as input, and the generated plans as output\n4. Performing PEFT on the Llama-8B Instruct base model using Unsloth with different ranks and alpha values\n5. Training on an H100-SXM5 GPU for varying epochs", "## Evaluation Results\n\nLlamaPlanner was evaluated on the HumanEval and EvalPlus benchmarks using various methods, including zero-shot, self-planning, base planner model, and fine-tuned planner model. The results demonstrated that LlamaPlanner outperforms the base Llama-3-8B model by 14% on HumanEval and 11% on EvalPlus. Additionally, plans generated by LlamaPlanner helped boost the performance of Llama-3-70B on HumanEval.\n\nIf you use LlamaPlanner in your research or applications, please cite the model using the following BibTeX entry:", "## License\n\nLlamaPlanner is released under the Apache License 2.0.", "## Acknowledgements\n\nWe would like to thank Meta for releasing the Llama model family and the open-source community for their contributions to the development of large language models and their applications in code generation tasks." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #code #conversational #en #dataset-verifiers-for-code/CodeNet-16K #dataset-verifiers-for-code/CodeNet-Planner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Llama-3-LlamaPlanner\n\n!image/jpeg", "## Model Description\n\nLlamaPlanner is a fine-tuned version of Meta's Llama-8B model which has been specifically designed for generating high-quality plans for code generation tasks. The model was trained on CodeNet-16k, a curated dataset of competitive programming problems, and their corresponding plans generated using Llama-3-70B. By leveraging the power of Parameter Efficient Fine-Tuning (PEFT), LlamaPlanner achieves performance comparable to much larger models in generating effective plans for code generation.", "## Model Details\n\n- Base Model: Llama-8B Instruct\n- Fine-Tuning Approach: Parameter Efficient Fine-Tuning (PEFT) using Unsloth\n- Training Data: CodeNet-16k, a filtered and deduplicated dataset of 16,500 competitive programming problems and their plans generated using Llama-3-70B\n- Training Infrastructure: H100-SXM5 GPU\n- Evaluation Benchmarks: HumanEval and EvalPlus", "## How to Use\n\nTo use LlamaPlanner with the Hugging Face Transformers library, follow these steps:", "## Training Details\n\nLlamaPlanner was trained using the following steps:\n\n1. Filtering and preprocessing the CodeNet dataset to create CodeNet-16k\n2. Generating plans for each problem using Llama-3-70B\n3. Formatting the problem description, input description, output description, and samples as input, and the generated plans as output\n4. Performing PEFT on the Llama-8B Instruct base model using Unsloth with different ranks and alpha values\n5. Training on an H100-SXM5 GPU for varying epochs", "## Evaluation Results\n\nLlamaPlanner was evaluated on the HumanEval and EvalPlus benchmarks using various methods, including zero-shot, self-planning, base planner model, and fine-tuned planner model. The results demonstrated that LlamaPlanner outperforms the base Llama-3-8B model by 14% on HumanEval and 11% on EvalPlus. Additionally, plans generated by LlamaPlanner helped boost the performance of Llama-3-70B on HumanEval.\n\nIf you use LlamaPlanner in your research or applications, please cite the model using the following BibTeX entry:", "## License\n\nLlamaPlanner is released under the Apache License 2.0.", "## Acknowledgements\n\nWe would like to thank Meta for releasing the Llama model family and the open-source community for their contributions to the development of large language models and their applications in code generation tasks." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pandafm/donut-vf3
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:33:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# git-base-coco-scamper4

This model is a fine-tuned version of [microsoft/git-base-coco](https://huggingface.co/microsoft/git-base-coco) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this card for how they map onto `TrainingArguments`):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.1
- Datasets 2.19.0
- Tokenizers 0.19.1
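For reference, a minimal sketch of how the hyperparameters above might map onto the Hugging Face `TrainingArguments` API. This is an illustration, not the card's actual training script; the `output_dir` value is an assumption.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported configuration; not the original script.
training_args = TrainingArguments(
    output_dir="git-base-coco-scamper4",  # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,        # effective (total) train batch size: 4
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                            # "Native AMP" mixed-precision training
)
```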
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "microsoft/git-base-coco", "model-index": [{"name": "git-base-coco-scamper4", "results": []}]}
Phuree/git-base-coco-scamper4
null
[ "transformers", "safetensors", "git", "text-generation", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/git-base-coco", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:33:30+00:00
[]
[]
TAGS #transformers #safetensors #git #text-generation #generated_from_trainer #dataset-imagefolder #base_model-microsoft/git-base-coco #license-mit #autotrain_compatible #endpoints_compatible #region-us
# git-base-coco-scamper4 This model is a fine-tuned version of microsoft/git-base-coco on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# git-base-coco-scamper4\n\nThis model is a fine-tuned version of microsoft/git-base-coco on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #git #text-generation #generated_from_trainer #dataset-imagefolder #base_model-microsoft/git-base-coco #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# git-base-coco-scamper4\n\nThis model is a fine-tuned version of microsoft/git-base-coco on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="mbartholet/taxi_qlearn", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
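As a possible continuation of the snippet above, here is a sketch of acting greedily with the loaded Q-table. It assumes the pickled dict exposes the table under a `qtable` key (as in the Deep RL course template) and that `env` follows the Gymnasium step API; neither is guaranteed by this card.

```python
import numpy as np

# Hypothetical continuation: greedy rollout with the loaded Q-table.
# Assumes model["qtable"] is an (n_states, n_actions) array and the
# Gymnasium API (reset() -> (state, info), step() -> 5-tuple).
state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
env.close()
print(f"Episode return: {total_reward}")
```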
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi_qlearn", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
mbartholet/taxi_qlearn
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-23T15:33:56+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "microsoft/phi-2"}
nk555/phi-2-experiment_ppo_quantized_600
null
[ "peft", "pytorch", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "region:us" ]
null
2024-04-23T15:34:00+00:00
[ "1910.09700" ]
[]
TAGS #peft #pytorch #safetensors #arxiv-1910.09700 #base_model-microsoft/phi-2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #pytorch #safetensors #arxiv-1910.09700 #base_model-microsoft/phi-2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
# This is a clone of the repository from [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)

For more information about this LLM, please go to [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct).
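Since this is a straight clone, it should load the same way as the upstream repository. A minimal sketch, assuming `trust_remote_code=True` is needed because the repo ships custom code and that `accelerate` is installed for `device_map="auto"`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NotAiLOL/Microsoft-Phi-3-mini-128k-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", trust_remote_code=True)
```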
{"language": ["en"], "license": "mit", "tags": ["phi3", "phi-3"]}
NotAiLOL/Microsoft-Phi-3-mini-128k-Instruct
null
[ "transformers", "safetensors", "phi3", "text-generation", "phi-3", "conversational", "custom_code", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:35:04+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #phi3 #text-generation #phi-3 #conversational #custom_code #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
# This is a clone of the repository from microsoft/Phi-3-mini-128k-instruct For more information about this LLM, please go to microsoft/Phi-3-mini-128k-instruct.
[ "# This is a clone of the repository from microsoft/Phi-3-mini-128k-instruct\nFor more information about this LLM model, please go to microsoft/Phi-3-mini-128k-instruct." ]
[ "TAGS\n#transformers #safetensors #phi3 #text-generation #phi-3 #conversational #custom_code #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# This is a clone of the repository from microsoft/Phi-3-mini-128k-instruct\nFor more information about this LLM model, please go to microsoft/Phi-3-mini-128k-instruct." ]
text-generation
transformers
# macadeliccc/gemma-orchid-7b-dpo AWQ

- Model creator: [macadeliccc](https://huggingface.co/macadeliccc)
- Original model: [gemma-orchid-7b-dpo](https://huggingface.co/macadeliccc/gemma-orchid-7b-dpo)

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/7pqiroePJW0WWm6JxwBoO.webp)

## Model Summary

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

This model is the second checkpoint of a future project. It's capable of function calling as well as having a strong base in communication skills.

This model has been finetuned on roughly 80k samples so far.

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/gemma-orchid-7b-dpo-AWQ"
system_message = "You are gemma-orchid-7b-dpo, incarnated as a powerful AI. You were created by macadeliccc."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
    "You walk one mile south, one mile west and one mile north. "\
    "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
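For readers who want to produce a quant like this one rather than just consume it, here is a minimal sketch of the AutoAWQ quantization flow. The `quant_config` values shown are common 4-bit defaults and an assumption; they are not confirmed as the settings used for this repo.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "macadeliccc/gemma-orchid-7b-dpo"
quant_path = "gemma-orchid-7b-dpo-AWQ"
# Assumed common defaults, not the verified settings for this repo.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)  # runs AWQ calibration
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```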
{"license": "other", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "datasets": ["Thermostatic/flowers", "jondurbin/truthy-dpo-v0.1", "Intel/orca_dpo_pairs", "glaiveai/glaive-function-calling-v2"], "license_name": "gemma-terms-of-use", "license_link": "https://ai.google.dev/gemma/terms", "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious", "model-index": [{"name": "gemma-orchid-7b-dpo", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.88, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 80.95, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 61.41, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 53.27}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 77.51, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 50.19, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo", "name": "Open LLM Leaderboard"}}]}]}
solidrust/gemma-orchid-7b-dpo-AWQ
null
[ "transformers", "safetensors", "gemma", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "dataset:Thermostatic/flowers", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:Intel/orca_dpo_pairs", "dataset:glaiveai/glaive-function-calling-v2", "license:other", "model-index", "text-generation-inference", "region:us" ]
null
2024-04-23T15:36:07+00:00
[]
[]
TAGS #transformers #safetensors #gemma #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #dataset-Thermostatic/flowers #dataset-jondurbin/truthy-dpo-v0.1 #dataset-Intel/orca_dpo_pairs #dataset-glaiveai/glaive-function-calling-v2 #license-other #model-index #text-generation-inference #region-us
# macadeliccc/gemma-orchid-7b-dpo AWQ - Model creator: macadeliccc - Original model: gemma-orchid-7b-dpo !image/webp ## Model Summary <img src="URL" alt="Built with Axolotl" width="200" height="32"/> This model is the second checkpoint of a future project. It's capable of function calling as well as having a strong base in communication skills. This model has been finetuned on roughly 80k samples so far. ## How to use ### Install the necessary packages ### Example Python code ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code
[ "# macadeliccc/gemma-orchid-7b-dpo AWQ\n\n- Model creator: macadeliccc\n- Original model: gemma-orchid-7b-dpo\n\n!image/webp", "## model Summary\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>\n</div>\n\nThis model is the second checkpoint of a future project. Its capable of function calling as well as having a strong base in communicational skills.\n\nThis model has been finetuned on roughly 80k samples so far.", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #dataset-Thermostatic/flowers #dataset-jondurbin/truthy-dpo-v0.1 #dataset-Intel/orca_dpo_pairs #dataset-glaiveai/glaive-function-calling-v2 #license-other #model-index #text-generation-inference #region-us \n", "# macadeliccc/gemma-orchid-7b-dpo AWQ\n\n- Model creator: macadeliccc\n- Original model: gemma-orchid-7b-dpo\n\n!image/webp", "## model Summary\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>\n</div>\n\nThis model is the second checkpoint of a future project. Its capable of function calling as well as having a strong base in communicational skills.\n\nThis model has been finetuned on roughly 80k samples so far.", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
DinoTheLewis/EVEE-Instruct-Interior-10.8B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:37:07+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Jozaita/fine_tune_test_2
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:38:58+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
This is a fine-tuned Mistral-7B model for the AutoTx-CrewAI version
{}
Superoisesuki/AutoTx_Mistral_7B_CrewAI
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:39:19+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is a fine-tuned Mistral-7B model for the AutoTx-CrewAI version
[]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
transformers
# FinLang/finance-chat-model-investopedia

<!-- Provide a quick summary of what the model is/does. -->

This Large Language Model (LLM) is an instruct fine-tuned version of mistralai/Mistral-7B-v0.1 using our open-sourced finance dataset https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset, developed for finance applications by the FinLang Team.

This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses.

# Plans

The research paper will be published soon. We are working on a v2 version of the model in which we increase the training corpus of financial data and use improved training techniques.

## How to Get Started with the Model

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = 'FinLang/investopedia_chat_model'
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

example = [
    {'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n CONTEXT:\n D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'},
    {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'},
    {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}
]

prompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1,
               eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)

print(f"Query:\n{example[1]['content']}")
print(f"Context:\n{example[0]['content']}")
print(f"Original Answer:\n{example[2]['content']}")
print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}")
```

## Training Details

PEFT config:

```python
{
    'Technique': 'QLoRA',
    'rank': 256,
    'target_modules': ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    'lora_alpha': 128,
    'lora_dropout': 0,
    'bias': "none",
}
```

Hyperparameters:

```python
{
    "epochs": 3,
    "evaluation_strategy": "epoch",
    "gradient_checkpointing": True,
    "max_grad_norm": 0.3,
    "optimizer": "adamw_torch_fused",
    "learning_rate": 2e-4,
    "lr_scheduler_type": "constant",
    "warmup_ratio": 0.03,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "gradient_accumulation_steps": 4
}
```
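To make the reported configuration concrete, here is a hedged sketch of how the PEFT config and hyperparameters above could be expressed with `peft.LoraConfig` and `transformers.TrainingArguments`. The card does not publish its training script, so the output directory and the pairing with a specific trainer class are assumptions.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported QLoRA setup; not the original script.
peft_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="investopedia_chat_model",  # assumed
    num_train_epochs=3,
    evaluation_strategy="epoch",
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    optim="adamw_torch_fused",
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
)
```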
--> We evaluated the model on test set (22.9k records) of https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset. Evaluation was done using Proprietary LLM as judge on four criteria Correctness, Faithfullness, Clarity, Completeness on scale of 1-5 (1 being worst & 5 being best). Model got an average score of 4.58 out of 5. Human Evaluation was performed on random sample of 10k records and we found approx 80% aligment between human & Proprietary LLM. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## License Since non-commercial datasets are used for fine-tuning, we release this model as cc-by-nc-4.0. ## Citation [coming soon] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
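The Training Details above list the QLoRA configuration as plain dictionaries. As a rough illustration of how those values map onto library objects, here is a minimal sketch using `peft` and `transformers`; the trainer class, output directory, and any preprocessing are assumptions, not the authors' released training code.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Adapter configuration mirroring the "PEFT Config" above
peft_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Trainer settings mirroring the "Hyperparameters" above
training_args = TrainingArguments(
    output_dir="investopedia-chat",  # hypothetical; not specified in the card
    num_train_epochs=3,
    evaluation_strategy="epoch",
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    optim="adamw_torch_fused",
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
)
```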
{"license": "cc-by-nc-4.0", "library_name": "transformers"}
FinLang/finance-chat-model-investopedia
null
[ "transformers", "safetensors", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:39:31+00:00
[]
[]
TAGS #transformers #safetensors #license-cc-by-nc-4.0 #endpoints_compatible #region-us
# FinLang/finance-chat-model-investopedia This Large Language Model (LLM) is an instruct fine-tuned version of the mistralai/Mistral-7B-v0.1 using our open-sourced finance dataset URL developed for finance application by FinLang Team This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses. # Plans The research paper will be published soon. We are working on a v2 version of the model where we are increasing the training corpus of financial data and using improved techniques for training models. ## How to Get Started with the Model import torch from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer, pipeline model_id='FinLang/investopedia_chat_model' model = AutoPeftModelForCausalLM.from_pretrained( model_id, device_map="auto", torch_dtype=torch.float16 ) tokenizer = AutoTokenizer.from_pretrained(model_id) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) example = [{'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n CONTEXT:\n D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'}, {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'}, {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}] prompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id) print(f"Query:\n{example[1]['content']}") print(f"Context:\n{example[0]['content']}") print(f"Original Answer:\n{example[2]['content']}") print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}") ## Training Details Peft Config : { 'Technqiue' : 'QLORA', 'rank': 256, 'target_modules' : ["q_proj", "k_proj", "v_proj", "o_proj","gate_proj", "up_proj", "down_proj",], 'lora_alpha' : 128, 'lora_dropout' : 0, 'bias': "none", } Hyperparameters: { "epochs": 3, "evaluation_strategy": "epoch", "gradient_checkpointing": True, "max_grad_norm" : 0.3, "optimizer" : "adamw_torch_fused", "learning_rate" : 2e-4, "lr_scheduler_type": "constant", "warmup_ratio" : 0.03, "per_device_train_batch_size" : 8, "per_device_eval_batch_size" : 8, "gradient_accumulation_steps" : 4 } ## Evaluation We evaluated the model on test set (22.9k records) of URL Evaluation was done using Proprietary LLM as judge on four criteria Correctness, Faithfullness, Clarity, Completeness on scale of 1-5 (1 being worst & 5 being best). Model got an average score of 4.58 out of 5. Human Evaluation was performed on random sample of 10k records and we found approx 80% aligment between human & Proprietary LLM. 
## Bias, Risks, and Limitations This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## License Since non-commercial datasets are used for fine-tuning, we release this model as cc-by-nc-4.0. [coming soon]
[ "# FinLang/finance-chat-model-investopedia\n\n\nThis Large Language Model (LLM) is an instruct fine-tuned version of the mistralai/Mistral-7B-v0.1 using our open-sourced finance dataset URL developed for finance application by FinLang Team\n\nThis project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses.", "# Plans\n\n The research paper will be published soon.\n We are working on a v2 version of the model where we are increasing the training corpus of financial data and using improved techniques for training models.", "## How to Get Started with the Model\n\n\nimport torch\n\nfrom peft import AutoPeftModelForCausalLM\n\nfrom transformers import AutoTokenizer, pipeline\n\nmodel_id='FinLang/investopedia_chat_model'\n\nmodel = AutoPeftModelForCausalLM.from_pretrained(\n model_id,\n device_map=\"auto\",\n torch_dtype=torch.float16\n)\n\ntokenizer = AutoTokenizer.from_pretrained(model_id)\n\npipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer)\n\nexample = [{'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\\n try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\\n CONTEXT:\\n D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\\n', 'role': 'system'}, {'content': ' In which universities did the individual obtain their academic qualifications?\\n', 'role': 'user'}, {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}]\n\nprompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)\n\noutputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)\n\nprint(f\"Query:\\n{example[1]['content']}\")\n\nprint(f\"Context:\\n{example[0]['content']}\")\n\nprint(f\"Original Answer:\\n{example[2]['content']}\")\n\nprint(f\"Generated Answer:\\n{outputs[0]['generated_text'][len(prompt):].strip()}\")", "## Training Details\n\nPeft Config :\n\n{\n 'Technqiue' : 'QLORA',\n \n 'rank': 256,\n \n 'target_modules' : [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\"gate_proj\", \"up_proj\", \"down_proj\",],\n \n 'lora_alpha' : 128,\n \n 'lora_dropout' : 0, \n \n 'bias': \"none\", \n\n}\n \nHyperparameters:\n\n{\n \"epochs\": 3,\n \n \"evaluation_strategy\": \"epoch\",\n \n \"gradient_checkpointing\": True,\n \n \"max_grad_norm\" : 0.3,\n \n \"optimizer\" : \"adamw_torch_fused\",\n \n \"learning_rate\" : 2e-4,\n \n \"lr_scheduler_type\": \"constant\",\n \n \"warmup_ratio\" : 0.03,\n \n \"per_device_train_batch_size\" : 8, \n \n \"per_device_eval_batch_size\" : 8,\n \n \"gradient_accumulation_steps\" : 4\n\n}", "## Evaluation\n\n\nWe evaluated the model on test set (22.9k records) of URL Evaluation was done using Proprietary LLM as judge on four criteria Correctness, Faithfullness, Clarity, Completeness on scale of 1-5 (1 
being worst & 5 being best). Model got an average score of 4.58 out of 5.\nHuman Evaluation was performed on random sample of 10k records and we found approx 80% aligment between human & Proprietary LLM.", "## Bias, Risks, and Limitations\n\n\nThis model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.", "## License\n\nSince non-commercial datasets are used for fine-tuning, we release this model as cc-by-nc-4.0.\n\n[coming soon]" ]
[ "TAGS\n#transformers #safetensors #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n", "# FinLang/finance-chat-model-investopedia\n\n\nThis Large Language Model (LLM) is an instruct fine-tuned version of the mistralai/Mistral-7B-v0.1 using our open-sourced finance dataset URL developed for finance application by FinLang Team\n\nThis project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses.", "# Plans\n\n The research paper will be published soon.\n We are working on a v2 version of the model where we are increasing the training corpus of financial data and using improved techniques for training models.", "## How to Get Started with the Model\n\n\nimport torch\n\nfrom peft import AutoPeftModelForCausalLM\n\nfrom transformers import AutoTokenizer, pipeline\n\nmodel_id='FinLang/investopedia_chat_model'\n\nmodel = AutoPeftModelForCausalLM.from_pretrained(\n model_id,\n device_map=\"auto\",\n torch_dtype=torch.float16\n)\n\ntokenizer = AutoTokenizer.from_pretrained(model_id)\n\npipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer)\n\nexample = [{'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\\n try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\\n CONTEXT:\\n D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\\n', 'role': 'system'}, {'content': ' In which universities did the individual obtain their academic qualifications?\\n', 'role': 'user'}, {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}]\n\nprompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)\n\noutputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)\n\nprint(f\"Query:\\n{example[1]['content']}\")\n\nprint(f\"Context:\\n{example[0]['content']}\")\n\nprint(f\"Original Answer:\\n{example[2]['content']}\")\n\nprint(f\"Generated Answer:\\n{outputs[0]['generated_text'][len(prompt):].strip()}\")", "## Training Details\n\nPeft Config :\n\n{\n 'Technqiue' : 'QLORA',\n \n 'rank': 256,\n \n 'target_modules' : [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\"gate_proj\", \"up_proj\", \"down_proj\",],\n \n 'lora_alpha' : 128,\n \n 'lora_dropout' : 0, \n \n 'bias': \"none\", \n\n}\n \nHyperparameters:\n\n{\n \"epochs\": 3,\n \n \"evaluation_strategy\": \"epoch\",\n \n \"gradient_checkpointing\": True,\n \n \"max_grad_norm\" : 0.3,\n \n \"optimizer\" : \"adamw_torch_fused\",\n \n \"learning_rate\" : 2e-4,\n \n \"lr_scheduler_type\": \"constant\",\n \n \"warmup_ratio\" : 0.03,\n \n \"per_device_train_batch_size\" : 8, \n \n \"per_device_eval_batch_size\" : 8,\n \n \"gradient_accumulation_steps\" : 4\n\n}", "## Evaluation\n\n\nWe evaluated the model on test set (22.9k records) of URL Evaluation was done using Proprietary LLM as 
judge on four criteria Correctness, Faithfullness, Clarity, Completeness on scale of 1-5 (1 being worst & 5 being best). Model got an average score of 4.58 out of 5.\nHuman Evaluation was performed on random sample of 10k records and we found approx 80% aligment between human & Proprietary LLM.", "## Bias, Risks, and Limitations\n\n\nThis model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.", "## License\n\nSince non-commercial datasets are used for fine-tuning, we release this model as cc-by-nc-4.0.\n\n[coming soon]" ]
object-detection
ultralytics
<div align="center"> <img width="640" alt="chanelcolgate/chamdiemgianhang-vsk-v4" src="https://huggingface.co/chanelcolgate/chamdiemgianhang-vsk-v4/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['BOM_GEN', 'BOM_JUN', 'BOM_KID', 'BOM_SAC', 'BOM_VTG', 'BOM_YTV', 'HOP_FEJ', 'HOP_FRE', 'HOP_JUN', 'HOP_POC', 'HOP_VTG', 'HOP_YTV', 'LOC_JUN', 'LOC_KID', 'LOC_YTV', 'LOO_DAU', 'LOO_KID', 'LOO_MAM', 'LOO_YTV', 'POS_LON', 'POS_NHO', 'POS_THA', 'TUI_GEN', 'TUI_JUN', 'TUI_KID', 'TUI_SAC', 'TUI_THV', 'TUI_THX', 'TUI_VTG', 'TUI_YTV'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.1.0 ultralytics==8.0.239 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('chanelcolgate/chamdiemgianhang-vsk-v4') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ```
{"library_name": "ultralytics", "tags": ["ultralyticsplus", "yolov8", "ultralytics", "yolo", "vision", "object-detection", "pytorch"], "datasets": ["chanelcolgate/yenthienviet"], "library_version": "8.0.239", "inference": false, "model-index": [{"name": "chanelcolgate/chamdiemgianhang-vsk-v4", "results": [{"task": {"type": "object-detection"}, "dataset": {"name": "yenthienviet", "type": "chanelcolgate/yenthienviet", "split": "validation"}, "metrics": [{"type": "precision", "value": 0.99425, "name": "[email protected](box)"}]}]}]}
chanelcolgate/chamdiemgianhang-vsk-v4
null
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "dataset:chanelcolgate/yenthienviet", "model-index", "has_space", "region:us" ]
null
2024-04-23T15:39:37+00:00
[]
[]
TAGS #ultralytics #tensorboard #v8 #ultralyticsplus #yolov8 #yolo #vision #object-detection #pytorch #dataset-chanelcolgate/yenthienviet #model-index #has_space #region-us
<div align="center"> <img width="640" alt="chanelcolgate/chamdiemgianhang-vsk-v4" src="URL </div> ### Supported Labels ### How to use - Install ultralyticsplus: - Load model and perform prediction:
[ "### Supported Labels", "### How to use\n\n- Install ultralyticsplus:\n\n\n\n- Load model and perform prediction:" ]
[ "TAGS\n#ultralytics #tensorboard #v8 #ultralyticsplus #yolov8 #yolo #vision #object-detection #pytorch #dataset-chanelcolgate/yenthienviet #model-index #has_space #region-us \n", "### Supported Labels", "### How to use\n\n- Install ultralyticsplus:\n\n\n\n- Load model and perform prediction:" ]
summarization
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mymt5-small-test This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 16.9555 - Rouge1: 5.3063 - Rouge2: 0.3834 - Rougel: 4.7129 - Rougelsum: 4.769 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 32.0529 | 1.0 | 10 | 22.9068 | 4.148 | 0.2063 | 4.0534 | 4.0281 | | 26.8483 | 2.0 | 20 | 20.5579 | 4.2091 | 0.2815 | 4.2287 | 4.2414 | | 26.3936 | 3.0 | 30 | 19.4139 | 4.199 | 0.2051 | 4.1823 | 4.1637 | | 24.8239 | 4.0 | 40 | 18.3165 | 4.2308 | 0.2812 | 4.2404 | 4.2749 | | 24.0505 | 5.0 | 50 | 17.3909 | 4.9556 | 0.486 | 4.6229 | 4.6138 | | 23.8294 | 6.0 | 60 | 17.0988 | 5.4206 | 0.5003 | 4.7981 | 4.7944 | | 22.7513 | 7.0 | 70 | 16.9862 | 5.3119 | 0.3966 | 4.814 | 4.7785 | | 22.836 | 8.0 | 80 | 16.9555 | 5.393 | 0.3829 | 4.7334 | 4.8031 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
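The card gives no inference snippet. A minimal one using the `transformers` summarization pipeline might look like the following; the checkpoint id comes from this repository, while the input text and generation lengths are purely illustrative.

```python
from transformers import pipeline

# Load the fine-tuned mT5 checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="thabat/Mymt5-small-test")

text = (
    "Hugging Face Transformers provides thousands of pretrained models for "
    "tasks such as summarization, translation, and question answering."
)
print(summarizer(text, max_length=48, min_length=8, do_sample=False)[0]["summary_text"])
```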
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/mt5-small", "model-index": [{"name": "Mymt5-small-test", "results": []}]}
thabat/Mymt5-small-test
null
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "base_model:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:40:43+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mt5 #text2text-generation #summarization #generated_from_trainer #base_model-google/mt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Mymt5-small-test ================ This model is a fine-tuned version of google/mt5-small on the None dataset. It achieves the following results on the evaluation set: * Loss: 16.9555 * Rouge1: 5.3063 * Rouge2: 0.3834 * Rougel: 4.7129 * Rougelsum: 4.769 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5.6e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 8 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mt5 #text2text-generation #summarization #generated_from_trainer #base_model-google/mt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# KnutJaegersberg/Llama3-Deita-8b AWQ

- Model creator: [KnutJaegersberg](https://huggingface.co/KnutJaegersberg)
- Original model: [Llama3-Deita-8b](https://huggingface.co/KnutJaegersberg/Llama3-Deita-8b)

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Llama3-Deita-8b-AWQ"
system_message = "You are Llama3-Deita-8b, incarnated as a powerful AI. You were created by KnutJaegersberg."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
### System:
{system_message}

### User:
{prompt}

### Assistant:
"""

prompt = "You're standing on the surface of the Earth. "\
    "You walk one mile south, one mile west and one mile north. "\
    "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
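Since vLLM is listed above as a supported backend for AWQ checkpoints, a minimal offline-inference sketch might look like this; the sampling settings are illustrative, not recommendations.

```python
from vllm import LLM, SamplingParams

# Load the AWQ-quantized checkpoint with vLLM (0.2.2+)
llm = LLM(model="solidrust/Llama3-Deita-8b-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Explain AWQ quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```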
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/Llama3-Deita-8b-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "conversational", "text-generation-inference", "region:us" ]
null
2024-04-23T15:42:44+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us
# KnutJaegersberg/Llama3-Deita-8b AWQ - Model creator: KnutJaegersberg - Original model: Llama3-Deita-8b ## How to use ### Install the necessary packages ### Example Python code ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code
[ "# KnutJaegersberg/Llama3-Deita-8b AWQ\n\n- Model creator: KnutJaegersberg\n- Original model: Llama3-Deita-8b", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us \n", "# KnutJaegersberg/Llama3-Deita-8b AWQ\n\n- Model creator: KnutJaegersberg\n- Original model: Llama3-Deita-8b", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
null
transformers
# Uploaded model

- **Developed by:** K00B404
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
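The card does not show how to load the weights. The repository name suggests LoRA adapter weights on top of the 4-bit base, so a plausible loading sketch with Unsloth's `FastLanguageModel` follows; that the repo actually stores a PEFT-style adapter (rather than merged weights) is an assumption, and the sequence length is illustrative.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint (assumed to be a PEFT adapter over the 4-bit base)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="K00B404/llama3_8B_python_tuned_90steps_lora",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Write a Python function that reverses a string.",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```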
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
K00B404/llama3_8B_python_tuned_90steps_lora
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:44:00+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: K00B404 - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: K00B404\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: K00B404\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-to-image
diffusers
# Daubrez_Painterly

<Gallery />

## Model description

Daubrez Painterly Style

Trained on 32 recent images from renowned "AI thief" (his words, not mine) Henry Daubrez, with no permission asked. This LoRA produces excellent painterly images that trend toward surreal and abstract with beautiful textures and expressive swirls. Images were captioned via GPTV and edited for best practices. Training was done using the Prodigy optimizer for 40 epochs with a batch size of 4 and a gradient accumulation of 4. Seems to work well with a variety of models and schedulers. Make sure to follow @henrydaubrez on X to see more of his excellent original work.

## Trigger words

You should use `painterly style` to trigger the image generation.

You should use `surreal` to trigger the image generation.

You should use `abstract` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/BlaireSilver13/Daubrez_Painterly/tree/main) them in the Files & versions tab.
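The card has no usage snippet. With `diffusers`, applying this LoRA to SDXL might look like the sketch below; the prompt mirrors the trigger words and widget prompts for this model, while the subject, step count, and negative prompt are illustrative.

```python
import torch
from diffusers import DiffusionPipeline

# Load SDXL base and apply the painterly LoRA from this repository
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("BlaireSilver13/Daubrez_Painterly")

prompt = ("painterly style, surreal, a lighthouse in a storm, rich colors, "
          "detailed textures, micro detailed brush strokes, enchanting")
image = pipe(prompt, negative_prompt="low quality, noise, dithering",
             num_inference_steps=30).images[0]
image.save("daubrez_painterly.png")
```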
{"license": "artistic-2.0", "tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00071_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00068_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00062_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00040_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00024_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00013_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-2-_00039_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-2-_00044_.png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "painterly style, surreal, abstract"}
BlaireSilver13/Daubrez_Painterly
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:artistic-2.0", "region:us" ]
null
2024-04-23T15:44:47+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-artistic-2.0 #region-us
# Daubrez_Painterly <Gallery /> ## Model description Daubrez Painterly Style Trained on 32 recent images from renowned "AI thief" (his words, not mine) Henry Daubrez, with no permission asked. This LoRA produces excellent painterly images that trend toward surreal and abstract with beautiful textures and expressive swirls. Images were captioned via GPTV and edited for best practices. Training was done using the prodigy optimizer for 40 epochs with a batch size of 4 and a gradient accumulation of 4. Seems to work well with a variety of models and schedulers. Make sure to follow @henrydaubrez on X to see more of his excellent original work. ## Trigger words You should use 'painterly style' to trigger the image generation. You should use 'surreal' to trigger the image generation. You should use 'abstract' to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# Daubrez_Painterly\n\n<Gallery />", "## Model description \n\nDaubrez Painterly Style\n\nTrained on 32 recent images from renowned &quot;AI thief&quot; (his words, not mine) Henry Daubrez, with no permission asked. This LoRA produces excellent painterly images that trend toward surreal and abstract with beautiful textures and expressive swirls. Images were captioned via GPTV and edited for best practices. Training was done using the prodigy optimizer for 40 epochs with a batch size of 4 and a gradient accumulation of 4. Seems to work well with a variety of models and schedulers. Make sure to follow @henrydaubrez on X to see more of his excellent original work.", "## Trigger words\n\nYou should use 'painterly style' to trigger the image generation.\n\nYou should use 'surreal' to trigger the image generation.\n\nYou should use 'abstract' to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-artistic-2.0 #region-us \n", "# Daubrez_Painterly\n\n<Gallery />", "## Model description \n\nDaubrez Painterly Style\n\nTrained on 32 recent images from renowned &quot;AI thief&quot; (his words, not mine) Henry Daubrez, with no permission asked. This LoRA produces excellent painterly images that trend toward surreal and abstract with beautiful textures and expressive swirls. Images were captioned via GPTV and edited for best practices. Training was done using the prodigy optimizer for 40 epochs with a batch size of 4 and a gradient accumulation of 4. Seems to work well with a variety of models and schedulers. Make sure to follow @henrydaubrez on X to see more of his excellent original work.", "## Trigger words\n\nYou should use 'painterly style' to trigger the image generation.\n\nYou should use 'surreal' to trigger the image generation.\n\nYou should use 'abstract' to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
text-generation
transformers
# Uploaded model

- **Developed by:** alquimista888
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
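No usage example is provided. Since the base is a chat model, a minimal sketch with the tokenizer's chat template follows; the repo id comes from this card, and it is an assumption that the repository holds merged, directly loadable weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alquimista888/unsloth_modelTrue"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt and generate a reply
messages = [{"role": "user", "content": "Give me one tip for writing readable code."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```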
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/tinyllama-chat-bnb-4bit"}
alquimista888/unsloth_modelTrue
null
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:47:25+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/tinyllama-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: alquimista888 - License: apache-2.0 - Finetuned from model : unsloth/tinyllama-chat-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: alquimista888\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/tinyllama-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: alquimista888\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-chat-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
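The quick-start section above is empty. Going only by the repository tags (a `gemma` text-generation checkpoint), a generic starting point might be the following; it is untested, since the card itself provides no usage details.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nem012/gemma2b-r32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Simple greedy completion to verify the checkpoint loads and generates
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```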
{"library_name": "transformers", "tags": []}
nem012/gemma2b-r32
null
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:49:48+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4

<Gallery />

## Model description

These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of generating images for the [Critical Dream](https://github.com/cosmicBboy/critical-dream) project.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: True.

Special VAE used for training: stabilityai/sdxl-vae.

## Trigger words

You should use "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus" to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4/tree/main) them in the Files & versions tab.

## Tracker run link

https://wandb.ai/nielsbantilan/dreambooth-lora-sd-xl/runs/tp1b5xxm

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
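The how-to-use block above is still a TODO. A plausible sketch with `diffusers` (standard SDXL LoRA loading is an assumption, not the authors' confirmed snippet; the VAE and prompt follow the training details and trigger words in this card):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# SDXL base plus the VAE used during training, then the Critical Dream LoRA
vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4"
)

prompt = ("a picture of [dm-matt-mercer], a dungeon master. background is a forest. "
          "fantasy art style, high quality, highly detailed, sharp focus")
pipe(prompt, num_inference_steps=30).images[0].save("critdream.png")
```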
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "prompt": "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\"", "widget": [{"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_0.png"}}, {"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_1.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_2.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_3.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_4.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_5.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_6.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_7.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_8.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_9.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_10.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_11.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_12.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_13.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_14.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_15.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_16.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_17.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_18.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_19.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_20.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_21.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_22.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_23.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. 
fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_24.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_25.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_26.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_27.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_28.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_29.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_30.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_31.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_32.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_33.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_34.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_35.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_36.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_37.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_38.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_39.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_40.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_41.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_42.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_43.png"}}]}
cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4
null
[ "diffusers", "text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-23T15:50:01+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion-xl #stable-diffusion-xl-diffusers #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4 <Gallery /> ## Model description These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, for the purpose of generating images for the Critical Dream project. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: True. Special VAE used for training: stabilityai/sdxl-vae. ## Trigger words You should use `a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Tracker run link URL ## Intended uses & limitations #### How to use See the minimal sketch at the end of this card. #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
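The card's "How to use" section is empty, so here is a minimal sketch, not taken from the original card; it assumes the standard diffusers LoRA-loading API. Only the repo id and trigger prompt come from this card; device, dtype, and step count are illustrative.

```python
# Minimal sketch: load SDXL base, attach these LoRA weights, and generate
# with the card's trigger prompt. Assumes a CUDA GPU; adjust device/dtype as needed.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4"
)

prompt = (
    "a picture of [dm-matt-mercer], a dungeon master. background is a forest. "
    "fantasy art style, high quality, highly detailed, sharp focus"
)
image = pipe(prompt, num_inference_steps=30).images[0]  # step count is illustrative
image.save("dm-matt-mercer.png")
```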
[ "# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4\n\n<Gallery />", "## Model description\n\nThese are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of\ngenerating images for the Critical Dream\nproject.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: True.\n\nSpecial VAE used for training: stabilityai/sdxl-vae.", "## Trigger words\n\nYou should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\" to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Tracker run link\n\nURL", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion-xl #stable-diffusion-xl-diffusers #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4\n\n<Gallery />", "## Model description\n\nThese are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of\ngenerating images for the Critical Dream\nproject.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: True.\n\nSpecial VAE used for training: stabilityai/sdxl-vae.", "## Trigger words\n\nYou should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\" to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Tracker run link\n\nURL", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
null
null
# Llama-3-8B-16K-GGUF - Original model: [Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K) <!-- description start --> ## Description This repo contains GGUF format model files for [Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp), the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI; this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama), a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), a free and open-source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/), an intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [localGPT](https://github.com/PromtEngineer/localGPT), an open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw). * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. 
Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/Llama-3-8B-16K-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/Llama-3-8B-16K-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/Llama-3-8B-16K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama-3-8B-16K-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: Llama-3-8B-16K This is an extended (16K) context version of LLaMA 3 8B (base, not instruct). Trained for five hours on 8x A6000 GPUs, using the `Yukang/LongAlpaca-16k-length` dataset. `rope_theta` was set to `1000000.0`. Trained with Axolotl. <!-- original-model-card end -->
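As a supplement to the LangChain guides linked in the card above, here is a minimal sketch, not from the original card, wiring this GGUF file into LangChain through the llama-cpp-python backend; the model path, context size, and generation settings are illustrative assumptions.

```python
# Minimal LangChain + llama-cpp-python sketch. Assumes
# `pip install langchain-community llama-cpp-python` and that the quant file
# from the download section above is already on disk.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # path from the download example
    n_ctx=16384,      # this is a 16K-context tune; lower this if memory is tight
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("Write a short story about llamas."))
```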
{"tags": ["GGUF"], "datasets": ["Yukang/LongAlpaca-16k-length"], "quantized_by": "andrijdavid"}
LiteLLMs/Llama-3-8B-16K-GGUF
null
[ "gguf", "GGUF", "dataset:Yukang/LongAlpaca-16k-length", "region:us" ]
null
2024-04-23T15:50:17+00:00
[]
[]
TAGS #gguf #GGUF #dataset-Yukang/LongAlpaca-16k-length #region-us
# Llama-3-8B-16K-GGUF - Original model: Llama-3-8B-16K ## Description This repo contains GGUF format model files for Llama-3-8B-16K. ### About GGUF GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL. Here is an incomplete list of clients and libraries that are known to support GGUF: * URL. This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * text-generation-webui, Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * Ollama Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​ * KoboldCpp, A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * GPT4All, This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * LM Studio An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * LoLLMS Web UI. A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * URL, An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * llama-cpp-python, A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * candle, A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * ctransformers, A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * localGPT An open-source initiative enabling private conversations with documents. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> ## How to download GGUF files Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * URL ### In 'text-generation-webui' Under Download Model, you can enter the model repo: LiteLLMs/Llama-3-8B-16K-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-URL. Then click Download. ### On the command line, including multiple files at once I recommend using the 'huggingface-hub' Python library: Then you can download any individual model file to the current directory, at high speed, with a command like this: <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI. To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf_transfer': And set environment variable 'HF_HUB_ENABLE_HF_TRANSFER' to '1': Windows Command Line users: You can set the environment variable by running 'set HF_HUB_ENABLE_HF_TRANSFER=1' before the download command. </details> ## Example 'URL' command Make sure you are using 'URL' from commit d0cee0d or later. Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change '-c 8192' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins' For other parameters and how to use them, please refer to the URL documentation ## How to run in 'text-generation-webui' Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL. ## How to run from Python code You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: llama-cpp-python docs. #### First install the package Run one of the following commands, according to your system: #### Simple llama-cpp-python example code ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * LangChain + llama-cpp-python * LangChain + ctransformers # Original model card: Llama-3-8B-16K This is an extended (16K) context version of LLaMA 3 8B (base, not instruct). Trained for five hours on 8x A6000 GPUs, using the 'Yukang/LongAlpaca-16k-length' dataset. 'rope_theta' was set to '1000000.0'. Trained with Axolotl.
[ "# Llama-3-8B-16K-GGUF\n- Original model: Llama-3-8B-16K", "## Description\n\nThis repo contains GGUF format model files for Llama-3-8B-16K.", "### About GGUF\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n* URL. This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.\n* text-generation-webui, Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.\n* Ollama Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​\n* KoboldCpp, A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.\n* GPT4All, This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.\n* LM Studio An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.\n* LoLLMS Web UI. A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.\n* URL, An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.\n* llama-cpp-python, A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.\n* candle, A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.\n* ctransformers, A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.\n* localGPT An open-source initiative enabling private conversations with documents.", "## Explanation of quantisation methods\n<details>\n <summary>Click to see details</summary>\nThe new methods available are:\n\n* GGML_TYPE_Q2_K - \"type-1\" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)\n* GGML_TYPE_Q3_K - \"type-0\" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.\n* GGML_TYPE_Q4_K - \"type-1\" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.\n* GGML_TYPE_Q5_K - \"type-1\" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw\n* GGML_TYPE_Q6_K - \"type-0\" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.\n</details>", "## How to download GGUF files\n\nNote for manual downloaders: You almost never want to clone the entire repo! 
Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.\n\nThe following clients/libraries will automatically download models for you, providing a list of available models to choose from:\n\n* LM Studio\n* LoLLMS Web UI\n* URL", "### In 'text-generation-webui'\n\nUnder Download Model, you can enter the model repo: LiteLLMs/Llama-3-8B-16K-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-URL.\n\nThen click Download.", "### On the command line, including multiple files at once\n\nI recommend using the 'huggingface-hub' Python library:\n\n\n\nThen you can download any individual model file to the current directory, at high speed, with a command like this:\n\n\n\n<details>\n <summary>More advanced huggingface-cli download usage (click to read)</summary>\n\nYou can also download multiple files at once with a pattern:\n\n\n\nFor more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.\n\nTo accelerate downloads on fast connections (1Gbit/s or higher), install 'hf_transfer':\n\n\n\nAnd set environment variable 'HF_HUB_ENABLE_HF_TRANSFER' to '1':\n\n\n\nWindows Command Line users: You can set the environment variable by running 'set HF_HUB_ENABLE_HF_TRANSFER=1' before the download command.\n</details>", "## Example 'URL' command\n\nMake sure you are using 'URL' from commit d0cee0d or later.\n\n\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\nChange '-c 8192' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\nIf you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'\n\nFor other parameters and how to use them, please refer to the URL documentation", "## How to run in 'text-generation-webui'\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.", "## How to run from Python code\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.", "### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code", "## How to use with LangChain\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers", "# Original model card: Llama-3-8B-16K\n\n\nThis is an extended (16K) context version of LLaMA 3 8B (base, not instruct). Trained for five hours on 8x A6000 GPUs, using the 'Yukang/LongAlpaca-16k-length' dataset.\n\n'rope_theta' was set to '1000000.0'. Trained with Axolotl." ]
[ "TAGS\n#gguf #GGUF #dataset-Yukang/LongAlpaca-16k-length #region-us \n", "# Llama-3-8B-16K-GGUF\n- Original model: Llama-3-8B-16K", "## Description\n\nThis repo contains GGUF format model files for Llama-3-8B-16K.", "### About GGUF\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n* URL. This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.\n* text-generation-webui, Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.\n* Ollama Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​\n* KoboldCpp, A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.\n* GPT4All, This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.\n* LM Studio An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.\n* LoLLMS Web UI. A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.\n* URL, An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.\n* llama-cpp-python, A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.\n* candle, A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.\n* ctransformers, A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.\n* localGPT An open-source initiative enabling private conversations with documents.", "## Explanation of quantisation methods\n<details>\n <summary>Click to see details</summary>\nThe new methods available are:\n\n* GGML_TYPE_Q2_K - \"type-1\" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)\n* GGML_TYPE_Q3_K - \"type-0\" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.\n* GGML_TYPE_Q4_K - \"type-1\" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.\n* GGML_TYPE_Q5_K - \"type-1\" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw\n* GGML_TYPE_Q6_K - \"type-0\" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.\n</details>", "## How to download GGUF files\n\nNote for manual downloaders: You almost never want to clone the entire repo! 
Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.\n\nThe following clients/libraries will automatically download models for you, providing a list of available models to choose from:\n\n* LM Studio\n* LoLLMS Web UI\n* URL", "### In 'text-generation-webui'\n\nUnder Download Model, you can enter the model repo: LiteLLMs/Llama-3-8B-16K-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-URL.\n\nThen click Download.", "### On the command line, including multiple files at once\n\nI recommend using the 'huggingface-hub' Python library:\n\n\n\nThen you can download any individual model file to the current directory, at high speed, with a command like this:\n\n\n\n<details>\n <summary>More advanced huggingface-cli download usage (click to read)</summary>\n\nYou can also download multiple files at once with a pattern:\n\n\n\nFor more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.\n\nTo accelerate downloads on fast connections (1Gbit/s or higher), install 'hf_transfer':\n\n\n\nAnd set environment variable 'HF_HUB_ENABLE_HF_TRANSFER' to '1':\n\n\n\nWindows Command Line users: You can set the environment variable by running 'set HF_HUB_ENABLE_HF_TRANSFER=1' before the download command.\n</details>", "## Example 'URL' command\n\nMake sure you are using 'URL' from commit d0cee0d or later.\n\n\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\nChange '-c 8192' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\nIf you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'\n\nFor other parameters and how to use them, please refer to the URL documentation", "## How to run in 'text-generation-webui'\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.", "## How to run from Python code\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.", "### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code", "## How to use with LangChain\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers", "# Original model card: Llama-3-8B-16K\n\n\nThis is an extended (16K) context version of LLaMA 3 8B (base, not instruct). Trained for five hours on 8x A6000 GPUs, using the 'Yukang/LongAlpaca-16k-length' dataset.\n\n'rope_theta' was set to '1000000.0'. Trained with Axolotl." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
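The "How to Get Started" section above is left as a placeholder. As a hedged sketch only: assuming this repo is a standard transformers causal language model (the entry's tags list `llama` and `text-generation`), loading would look roughly like the following; the repo id comes from this entry's metadata, and the prompt and generation settings are illustrative.

```python
# Hypothetical quick-start sketch for a Llama-style causal LM; not from the card.
# device_map="auto" requires the `accelerate` package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DinoTheLewis/Llama-2-koen-Interior-SFT-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Describe a cozy living room interior.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```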
{"library_name": "transformers", "tags": []}
DinoTheLewis/Llama-2-koen-Interior-SFT-13B
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:51:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.9455 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.8426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 9 | 1.2550 | 0.0 | 0.0 | 0.0 | 0.8424 | | No log | 2.0 | 18 | 0.9704 | 0.0 | 0.0 | 0.0 | 0.8426 | | No log | 3.0 | 27 | 0.9455 | 0.0 | 0.0 | 0.0 | 0.8426 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
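The card does not show inference code; here is a minimal sketch, not from the original card, using the standard transformers token-classification pipeline. The repo id comes from this entry, and the example sentence is illustrative; note the card reports F1 = 0.0, so predictions may not be meaningful.

```python
# Minimal sketch: run the fine-tuned NER model through the pipeline API.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Shresht-Venkat/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```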
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["generator"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "generator", "type": "generator", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "precision", "value": 0.0, "name": "Precision"}, {"type": "recall", "value": 0.0, "name": "Recall"}, {"type": "f1", "value": 0.0, "name": "F1"}, {"type": "accuracy", "value": 0.8426458239131839, "name": "Accuracy"}]}]}]}
Shresht-Venkat/bert-finetuned-ner
null
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:generator", "base_model:bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:51:50+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #dataset-generator #base_model-bert-base-cased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-finetuned-ner ================== This model is a fine-tuned version of bert-base-cased on the generator dataset. It achieves the following results on the evaluation set: * Loss: 0.9455 * Precision: 0.0 * Recall: 0.0 * F1: 0.0 * Accuracy: 0.8426 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #dataset-generator #base_model-bert-base-cased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
null
# Dataset Card for [Needs More Information] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Web interface of the Pangloss Collection, which hosts the data sets](https://pangloss.cnrs.fr/) - **Repository:** [GitHub repository of the Pangloss Collection, which hosts the data sets](https://github.com/CNRS-LACITO/Pangloss/) - **Paper:** [A paper about the Pangloss Collection, including a presentation of the Document Type Definition](https://halshs.archives-ouvertes.fr/halshs-01003734) [A paper in French about the deposit in Zenodo](https://halshs.archives-ouvertes.fr/halshs-03475436) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Benjamin Galliot](mailto:[email protected]) ### Dataset Summary Two audio corpora of minority languages of China (Japhug and Na), with transcriptions, proposed as reference data sets for experiments in Natural Language Processing. The data, collected and transcribed in the course of immersion fieldwork, amount to a total of about 1,900 minutes in Japhug and 200 minutes in Na. By making them available in an easily accessible and usable form, we hope to facilitate the development and deployment of state-of-the-art NLP tools for the full range of human languages. There is an associated tool for assembling datasets from the Pangloss Collection (an open archive) in a way that ensures full reproducibility of experiments conducted on these data. The Document Type Definition for the XML files is available here: http://cocoon.huma-num.fr/schemas/Archive.dtd ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Japhug (ISO 639-3 code: jya, Glottolog language code: japh1234) and Yongning Na (ISO 639-3 code: nru, Glottolog language code: yong1288) are two minority languages of China. The documents in the dataset have a transcription in the endangered language. Some of the documents have translations into French, English, and Chinese. ## Dataset Structure ### Data Instances A typical data row includes the path, audio, sentence, document type and several translations (depending on the sub-corpus). 
``` { "path": "cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav", "audio": "{'path': 'na/cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav', 'array': array([0.00018311, 0.00015259, 0.00021362, ..., 0.00030518, 0.00030518, 0.00054932], dtype=float32), 'sampling_rate': 16000}", "sentence": "ʈʂʰɯ˧ | ɖɤ˧mi˧-ɬi˧pi˩ ɲi˩", "doctype": "WORDLIST", "translation:zh": "狐狸的耳朵", "translation:fr": "oreilles de renard", "translation:en": "fox's ears", } ``` ### Data Fields path: the path to the audio file; audio: a dictionary containing the path to the audio file, the audio array and the sampling rate; sentence: the sentence the native speaker pronounced; doctype: the document type (a text or a word list); translation:XX: the translation of the sentence into language XX. ### Data Splits The train, test and validation splits have all been reviewed and were split randomly (ratio 8:1:1) at sentence level (after the extraction from various files). ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset The dataset was collected in immersion fieldwork for language documentation. It contributes to the documentation and study of the world's languages by providing documents of connected, spontaneous speech recorded in their cultural context and transcribed in consultation with native speakers. The impacts concern research and society at large: a guiding principle of the Pangloss Collection, which hosts the data sets, is that a close association between documentation and research is highly profitable to both. A range of possible uses exists, for the scientific and speaker communities and for the general public. ### Discussion of Biases The corpora are single-speaker and hence clearly do not reflect the sociolinguistic and dialectal diversity of the languages. No claim is made that the language variety described constitutes a 'standard'. ### Other Known Limitations The translations are entirely hand-made by experts working on these languages; the amount and type of translations available varies from document to document, as not all documents have translations and not all translated documents have the same translation languages (Chinese, French, English...). ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
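As a usage note appended to this card: a minimal loading sketch, not from the original card, assuming the standard datasets API; the repo id and config names come from this entry's metadata.

```python
# Minimal sketch: load one config of the Pangloss dataset and inspect a row.
# Requires `pip install datasets`; decoding the audio column additionally
# needs audio support (e.g. soundfile).
from datasets import load_dataset

ds = load_dataset("Lacito/pangloss", "yong1288")  # configs: "yong1288" (Na), "japh1234" (Japhug)
print(ds)  # train/test/validation splits, split roughly 8:1:1 at sentence level
row = ds["train"][0]
print(row["sentence"], row["doctype"])
```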
{"language": ["jya", "nru"], "license": "cc-by-nc-sa-4.0", "pretty_name": "Pangloss", "annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language_bcp47": ["x-japh1234", "x-yong1288"], "language_details": "jya consists of japh1234 (Glottolog code); nru consists of yong1288 (Glottolog code)", "multilinguality": ["multilingual", "translation"], "size_categories": {"yong1288": ["10K<n<100K"], "japh1234": ["10K<n<100K"]}, "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": ["speech-recognition"], "configs": [{"config_name": "yong1288", "data_files": [{"split": "train", "path": "yong1288/train.csv"}, {"split": "test", "path": "yong1288/test.csv"}, {"split": "validation", "path": "yong1288/validation.csv"}]}, {"config_name": "japh1234", "data_files": [{"split": "train", "path": "japh1234/train.csv"}, {"split": "test", "path": "japh1234/test.csv"}, {"split": "validation", "path": "japh1234/validation.csv"}]}]}
Lacito/pangloss
null
[ "jya", "nru", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-04-23T15:55:14+00:00
[]
[ "jya", "nru" ]
TAGS #jya #nru #license-cc-by-nc-sa-4.0 #region-us
# Dataset Card for ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: Web interface of the Pangloss Collection, which hosts the data sets - Repository: GithHub repository of the Pangloss Collection, which hosts the data sets - Paper: A paper about the Pangloss Collection, including a presentation of the Document Type Definition A paper in French about the deposit in Zenodo - Leaderboard: - Point of Contact: Benjamin Galliot ### Dataset Summary Two audio corpora of minority languages of China (Japhug and Na), with transcriptions, proposed as reference data sets for experiments in Natural Language Processing. The data, collected and transcribed in the course of immersion fieldwork, amount to a total of about 1,900 minutes in Japhug and 200 minutes in Na. By making them available in an easily accessible and usable form, we hope to facilitate the development and deployment of state-of-the-art NLP tools for the full range of human languages. There is an associated tool for assembling datasets from the Pangloss Collection (an open archive) in a way that ensures full reproducibility of experiments conducted on these data. The Document Type Definition for the XML files is available here: URL ### Supported Tasks and Leaderboards ### Languages Japhug (ISO 639-3 code: jya, Glottolog language code: japh1234) and Yongning Na (ISO 639-3 code: nru, Glottolog language code: yong1288) are two minority languages of China. The documents in the dataset have a transcription in the endangered language. Some of the documents have translations into French, English, and Chinese. ## Dataset Structure ### Data Instances A typical data row includes the path, audio, sentence, document type and several translations (depending on the sub-corpus). ' { "path": "cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav", "audio": "{'path': 'na/cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav', 'array': array([0.00018311, 0.00015259, 0.00021362, ..., 0.00030518, 0.00030518, 0.00054932], dtype=float32), 'sampling_rate': 16000}", "sentence": "ʈʂʰɯ˧ | ɖɤ˧mi˧-ɬi˧pi˩ ɲi˩", "doctype": "WORDLIST", "translation:zh": "狐狸的耳朵", "translation:fr": "oreilles de renard", "translation:en": "fox's ears", } ' ### Data Fields path: the path to the audio file;; audio: a dictionary containing the path to the audio file, the audio array and the sampling rate; sentence: the sentence the native has pronunced; doctype: the document type (a text or a word list); translation:XX: the translation of the sentence in the language XX. ### Data Splits The train, test and validation splits have all been reviewed and were splitted randomly (ratio 8:1:1) at sentence level (after the extraction from various files). ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset The dataset was collected in immersion fieldwork for language documentation. It contributes to the documentation and study of the world's languages by providing documents of connected, spontaneous speech recorded in their cultural context and transcribed in consultation with native speakers. The impacts concern both research and society at large: a guiding principle of the Pangloss Collection, which hosts the data sets, is that a close association between documentation and research is highly profitable to both. A range of possibilities for use exists, for the scientific and speaker communities and for the general public. ### Discussion of Biases The corpora are single-speaker and hence clearly do not reflect the sociolinguistic and dialectal diversity of the languages. No claim is made that the language variety described constitutes a 'standard'. ### Other Known Limitations The translations are entirely hand-made by experts working on these languages; the amount and type of translations available varies from document to document, as not all documents have translations and not all translated documents have the same translation languages (Chinese, French, English...). ## Additional Information ### Dataset Curators ### Licensing Information
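Given the fields described above (path, audio, sentence, doctype, translation:XX), a row of this dataset could be inspected with the `datasets` library roughly as follows. This is a minimal sketch: the repository id below is a placeholder, since the card does not name the Hub dataset, and it assumes the data are hosted in a standard `datasets`-loadable layout.

```python
from datasets import load_dataset, Audio

# Placeholder repository id -- the card does not name the Hub dataset.
ds = load_dataset("pangloss/japhug-na", split="train")

# Decode audio at 16 kHz, matching the sampling_rate in the example row.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

row = ds[0]
print(row["sentence"])            # transcription in the endangered language
print(row["doctype"])             # "TEXT" or "WORDLIST"
print(row["audio"]["array"][:5])  # first few waveform samples
```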
[ "# Dataset Card for", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: Web interface of the Pangloss Collection, which hosts the data sets\n- Repository: GithHub repository of the Pangloss Collection, which hosts the data sets\n- Paper: A paper about the Pangloss Collection, including a presentation of the Document Type Definition\nA paper in French about the deposit in Zenodo\n- Leaderboard: \n- Point of Contact: Benjamin Galliot", "### Dataset Summary\n\nTwo audio corpora of minority languages of China (Japhug and Na), with transcriptions, proposed as reference data sets for experiments in Natural Language Processing. The data, collected and transcribed in the course of immersion fieldwork, amount to a total of about 1,900 minutes in Japhug and 200 minutes in Na. By making them available in an easily accessible and usable form, we hope to facilitate the development and deployment of state-of-the-art NLP tools for the full range of human languages. There is an associated tool for assembling datasets from the Pangloss Collection (an open archive) in a way that ensures full reproducibility of experiments conducted on these data.\nThe Document Type Definition for the XML files is available here:\nURL", "### Supported Tasks and Leaderboards", "### Languages\n\nJaphug (ISO 639-3 code: jya, Glottolog language code: japh1234) and Yongning Na (ISO 639-3 code: nru, Glottolog language code: yong1288) are two minority languages of China. The documents in the dataset have a transcription in the endangered language. 
Some of the documents have translations into French, English, and Chinese.", "## Dataset Structure", "### Data Instances\n\nA typical data row includes the path, audio, sentence, document type and several translations (depending on the sub-corpus).\n\n'\n{\n \"path\": \"cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav\",\n \"audio\": \"{'path': 'na/cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav', 'array': array([0.00018311, 0.00015259, 0.00021362, ..., 0.00030518, 0.00030518, 0.00054932], dtype=float32), 'sampling_rate': 16000}\",\n \"sentence\": \"ʈʂʰɯ˧ | ɖɤ˧mi˧-ɬi˧pi˩ ɲi˩\",\n \"doctype\": \"WORDLIST\",\n \"translation:zh\": \"狐狸的耳朵\",\n \"translation:fr\": \"oreilles de renard\",\n \"translation:en\": \"fox's ears\",\n}\n'", "### Data Fields\n\npath: the path to the audio file;;\n\naudio: a dictionary containing the path to the audio file, the audio array and the sampling rate;\n\nsentence: the sentence the native has pronunced;\n\ndoctype: the document type (a text or a word list);\n\ntranslation:XX: the translation of the sentence in the language XX.", "### Data Splits\n\nThe train, test and validation splits have all been reviewed and were splitted randomly (ratio 8:1:1) at sentence level (after the extraction from various files).", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset was collected in immersion fieldwork for language documentation. It contributes to the documentation and study of the world's languages by providing documents of connected, spontaneous speech recorded in their cultural context and transcribed in consultation with native speakers. The impacts concern research, and society at large: a guiding principle of the Pangloss Collection, which hosts the data sets, is that a close association between documentation and research is highly profitable to both. A range of possibilities for uses exist, for the scientific and speaker communities and for the general public.", "### Discussion of Biases\n\nThe corpora are single-speaker and hence clearly do not reflect the sociolinguistic and dialectal diversity of the languages. No claim is made that the language variety described constitutes a 'standard'.", "### Other Known Limitations\n\nThe translations are entirely hand-made by experts working on these languages; the amount and type of translations available varies from document to document, as not all documents have translations and not all translated documents have the same translation languages (Chinese, French, English...).", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#jya #nru #license-cc-by-nc-sa-4.0 #region-us \n", "# Dataset Card for", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: Web interface of the Pangloss Collection, which hosts the data sets\n- Repository: GithHub repository of the Pangloss Collection, which hosts the data sets\n- Paper: A paper about the Pangloss Collection, including a presentation of the Document Type Definition\nA paper in French about the deposit in Zenodo\n- Leaderboard: \n- Point of Contact: Benjamin Galliot", "### Dataset Summary\n\nTwo audio corpora of minority languages of China (Japhug and Na), with transcriptions, proposed as reference data sets for experiments in Natural Language Processing. The data, collected and transcribed in the course of immersion fieldwork, amount to a total of about 1,900 minutes in Japhug and 200 minutes in Na. By making them available in an easily accessible and usable form, we hope to facilitate the development and deployment of state-of-the-art NLP tools for the full range of human languages. There is an associated tool for assembling datasets from the Pangloss Collection (an open archive) in a way that ensures full reproducibility of experiments conducted on these data.\nThe Document Type Definition for the XML files is available here:\nURL", "### Supported Tasks and Leaderboards", "### Languages\n\nJaphug (ISO 639-3 code: jya, Glottolog language code: japh1234) and Yongning Na (ISO 639-3 code: nru, Glottolog language code: yong1288) are two minority languages of China. The documents in the dataset have a transcription in the endangered language. 
Some of the documents have translations into French, English, and Chinese.", "## Dataset Structure", "### Data Instances\n\nA typical data row includes the path, audio, sentence, document type and several translations (depending on the sub-corpus).\n\n'\n{\n \"path\": \"cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav\",\n \"audio\": \"{'path': 'na/cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav', 'array': array([0.00018311, 0.00015259, 0.00021362, ..., 0.00030518, 0.00030518, 0.00054932], dtype=float32), 'sampling_rate': 16000}\",\n \"sentence\": \"ʈʂʰɯ˧ | ɖɤ˧mi˧-ɬi˧pi˩ ɲi˩\",\n \"doctype\": \"WORDLIST\",\n \"translation:zh\": \"狐狸的耳朵\",\n \"translation:fr\": \"oreilles de renard\",\n \"translation:en\": \"fox's ears\",\n}\n'", "### Data Fields\n\npath: the path to the audio file;;\n\naudio: a dictionary containing the path to the audio file, the audio array and the sampling rate;\n\nsentence: the sentence the native has pronunced;\n\ndoctype: the document type (a text or a word list);\n\ntranslation:XX: the translation of the sentence in the language XX.", "### Data Splits\n\nThe train, test and validation splits have all been reviewed and were splitted randomly (ratio 8:1:1) at sentence level (after the extraction from various files).", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset was collected in immersion fieldwork for language documentation. It contributes to the documentation and study of the world's languages by providing documents of connected, spontaneous speech recorded in their cultural context and transcribed in consultation with native speakers. The impacts concern research, and society at large: a guiding principle of the Pangloss Collection, which hosts the data sets, is that a close association between documentation and research is highly profitable to both. A range of possibilities for uses exist, for the scientific and speaker communities and for the general public.", "### Discussion of Biases\n\nThe corpora are single-speaker and hence clearly do not reflect the sociolinguistic and dialectal diversity of the languages. No claim is made that the language variety described constitutes a 'standard'.", "### Other Known Limitations\n\nThe translations are entirely hand-made by experts working on these languages; the amount and type of translations available varies from document to document, as not all documents have translations and not all translated documents have the same translation languages (Chinese, French, English...).", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"}
AlienKevin/Meta-Llama-3-8B-tagllm-lang-10
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-04-23T15:56:24+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
nem012/gemma2b-r16
null
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T15:56:49+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
help writing books (summarize / style / rhythm / stats / redundancy / sentence autocompletion / etc...) version 0.p
{"license": "apache-2.0"}
blaackjack/Coach_Scrib
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-23T15:58:04+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
help writing books (summarize / style / rhythm / stats / redundancy / sentence autocompletion / etc...) version 0.p
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
xilpam/v_1_test_3_layoutlm-funsd-tf
null
[ "transformers", "safetensors", "layoutlm", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T15:58:10+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #layoutlm #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #layoutlm #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
This is a 1-step inference Hyper-SD SDXL model for use with [FastSD CPU](https://github.com/rupeshs/fastsdcpu)
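A minimal diffusers sketch of the advertised single-step inference is shown below. The sampling settings (one step, no classifier-free guidance) are assumptions based on how Hyper-SD distillations are normally run, not instructions from this card, and the prompt is illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "rupeshs/hyper-sd-sdxl-1-step", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=1,  # single-step distilled model
    guidance_scale=0.0,     # distilled checkpoints typically skip CFG
).images[0]
image.save("lighthouse.png")
```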
{"license": "openrail++"}
rupeshs/hyper-sd-sdxl-1-step
null
[ "diffusers", "safetensors", "license:openrail++", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-23T15:58:40+00:00
[]
[]
TAGS #diffusers #safetensors #license-openrail++ #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
This is a 1-step inference Hyper-SD SDXL model for use with FastSD CPU
[]
[ "TAGS\n#diffusers #safetensors #license-openrail++ #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Reem333/LongFormer-Paper-Citaion-Classifier
null
[ "transformers", "safetensors", "longformer", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-23T16:01:20+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #longformer #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #longformer #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** donlinglok - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
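A short sketch of reloading this checkpoint with Unsloth's FastLanguageModel follows. The sequence length is an illustrative choice, and 4-bit loading mirrors the bnb-4bit base model named above.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="donlinglok/llama-3-8b-jy-bnb-4bit",
    max_seq_length=2048,  # illustrative; pick to match your use case
    load_in_4bit=True,    # weights follow the bnb 4-bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```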
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
donlinglok/llama-3-8b-jy-bnb-4bit
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-23T16:01:41+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: donlinglok - License: apache-2.0 - Finetuned from model: unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: donlinglok\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: donlinglok\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
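The "How to Get Started" section of this card is still a placeholder. A minimal sketch of loading the checkpoint with the standard `transformers` causal-LM API could look like this; the prompt and generation settings are assumptions.

```python
# Sketch: basic text generation with this Llama checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kawagoshi-llm-team/llama2_multinode_test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```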
{"library_name": "transformers", "tags": []}
kawagoshi-llm-team/llama2_multinode_test
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-23T16:02:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
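The card ships no usage snippet. A minimal sketch of running this checkpoint as a token-classification pipeline follows; the example sentence is an assumption, and since the training dataset is unknown, so are the label set and tagging quality.

```python
# Sketch: named-entity tagging with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Khetnhio/bert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```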
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetuned-ner", "results": []}]}
Khetnhio/bert-base-uncased-finetuned-ner
null
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T16:02:13+00:00
[]
[]
TAGS #transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# bert-base-uncased-finetuned-ner This model is a fine-tuned version of bert-base-uncased on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# bert-base-uncased-finetuned-ner\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 6", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-base-uncased-finetuned-ner\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 6", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
null
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/R6m6l4ohxIOVYeQdYCz09.png) > [!IMPORTANT] > Outdated GGUFs; check [here](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF) for quants made with a newer version of llama.cpp. Some GGUF quants of [xxx777xxxASD/ChaoticSoliloquy-4x8B](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B)
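A hedged sketch of running one of these quants locally with the `llama-cpp-python` bindings follows; the quant filename, context size, and prompt are assumptions (point `model_path` at whichever quant file you actually downloaded).

```python
# Sketch: local inference over a downloaded GGUF quant.
from llama_cpp import Llama

llm = Llama(
    model_path="./ChaoticSoliloquy-4x8B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,  # context window; adjust to your hardware
)
out = llm("Write a short soliloquy about rain.", max_tokens=128)
print(out["choices"][0]["text"])
```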
{"language": ["en"], "license": "llama3", "tags": ["moe"]}
xxx777xxxASD/ChaoticSoliloquy-4x8B-GGUF
null
[ "gguf", "moe", "en", "license:llama3", "region:us" ]
null
2024-04-23T16:03:50+00:00
[]
[ "en" ]
TAGS #gguf #moe #en #license-llama3 #region-us
!image/png > [!IMPORTANT] > Outdated GGUFs; check here for quants made with a newer version of llama.cpp Some GGUF quants of xxx777xxxASD/ChaoticSoliloquy-4x8B
[]
[ "TAGS\n#gguf #moe #en #license-llama3 #region-us \n" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
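The usage section of this card is a placeholder. Given the `text-classification` pipeline tag and BERT architecture, a minimal sketch might look like the following; the example sentence is an assumption, and the label set is not documented on the card.

```python
# Sketch: score a sentence with the classifier.
# Label names and meanings are not documented on this card.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="liserman/parlbert_climate_change_blame_v02",
)
print(clf("The government has failed to act on climate change."))
```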
{"library_name": "transformers", "tags": []}
liserman/parlbert_climate_change_blame_v02
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-23T16:04:30+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Markhit/CodeLlama3-8B-Python <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
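As a concrete, hedged illustration of the Usage note above: one way to fetch a single quant from this repo and run it locally is `huggingface_hub` plus `llama-cpp-python`. The Q4_K_M filename comes from the quant table ("fast, recommended"); the prompt and context size are assumptions.

```python
# Sketch: download one quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/CodeLlama3-8B-Python-GGUF",
    filename="CodeLlama3-8B-Python.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("def fibonacci(n):", max_tokens=128)
print(out["choices"][0]["text"])
```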
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["code"], "datasets": ["ajibawa-2023/Python-Code-23k-ShareGPT"], "base_model": "Markhit/CodeLlama3-8B-Python", "license_link": "LICENSE", "quantized_by": "mradermacher"}
mradermacher/CodeLlama3-8B-Python-GGUF
null
[ "transformers", "gguf", "code", "en", "dataset:ajibawa-2023/Python-Code-23k-ShareGPT", "base_model:Markhit/CodeLlama3-8B-Python", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-23T16:05:12+00:00
[]
[ "en" ]
TAGS #transformers #gguf #code #en #dataset-ajibawa-2023/Python-Code-23k-ShareGPT #base_model-Markhit/CodeLlama3-8B-Python #license-llama3 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #code #en #dataset-ajibawa-2023/Python-Code-23k-ShareGPT #base_model-Markhit/CodeLlama3-8B-Python #license-llama3 #endpoints_compatible #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
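This card has no usage snippet and no pipeline tag. Since viT5 is a T5-style encoder-decoder for Vietnamese, a hedged sketch of loading it as a seq2seq model might look like this; the input text and the seq2seq head are assumptions, not details confirmed by the card.

```python
# Sketch: seq2seq inference, assuming a T5-style checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Duakovui/viT5_instruct_uit_ate1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Vietnamese input ("The food is tasty but the service is slow.")
inputs = tokenizer("Món ăn ngon nhưng phục vụ chậm.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```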
{"library_name": "transformers", "tags": []}
Duakovui/viT5_instruct_uit_ate1
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-23T16:06:26+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]