| Column | Type | Range |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-26 12:28:17 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 distinct values |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-26 12:22:02 |
| card | string | lengths 11 to 1.01M |
hkivancoral/hushem_40x_deit_tiny_sgd_00001_fold1
|
hkivancoral
| 2023-12-24T17:36:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T17:06:44Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_sgd_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.28888888888888886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3702
- Accuracy: 0.2889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
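For reference, a minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments`; the actual training script is not included in this card, and `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters; the
# actual training script is not part of this model card.
training_args = TrainingArguments(
    output_dir="hushem_40x_deit_tiny_sgd_00001_fold1",  # placeholder
    learning_rate=1e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # The listed Adam betas and epsilon are the Trainer defaults.
)
```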
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5833 | 1.0 | 215 | 1.4002 | 0.1778 |
| 1.5381 | 2.0 | 430 | 1.3990 | 0.2 |
| 1.505 | 3.0 | 645 | 1.3978 | 0.2222 |
| 1.446 | 4.0 | 860 | 1.3967 | 0.2444 |
| 1.4742 | 5.0 | 1075 | 1.3956 | 0.2222 |
| 1.3991 | 6.0 | 1290 | 1.3945 | 0.2222 |
| 1.4142 | 7.0 | 1505 | 1.3933 | 0.2222 |
| 1.4895 | 8.0 | 1720 | 1.3923 | 0.2222 |
| 1.4297 | 9.0 | 1935 | 1.3912 | 0.2222 |
| 1.4803 | 10.0 | 2150 | 1.3901 | 0.2222 |
| 1.4253 | 11.0 | 2365 | 1.3890 | 0.2222 |
| 1.4151 | 12.0 | 2580 | 1.3880 | 0.2222 |
| 1.3649 | 13.0 | 2795 | 1.3870 | 0.2222 |
| 1.4058 | 14.0 | 3010 | 1.3860 | 0.2444 |
| 1.3858 | 15.0 | 3225 | 1.3850 | 0.2444 |
| 1.3985 | 16.0 | 3440 | 1.3841 | 0.2444 |
| 1.4078 | 17.0 | 3655 | 1.3832 | 0.2444 |
| 1.3916 | 18.0 | 3870 | 1.3823 | 0.2444 |
| 1.4138 | 19.0 | 4085 | 1.3814 | 0.2444 |
| 1.3697 | 20.0 | 4300 | 1.3807 | 0.2444 |
| 1.3976 | 21.0 | 4515 | 1.3799 | 0.2444 |
| 1.45 | 22.0 | 4730 | 1.3791 | 0.2444 |
| 1.3757 | 23.0 | 4945 | 1.3784 | 0.2444 |
| 1.4088 | 24.0 | 5160 | 1.3777 | 0.2667 |
| 1.3948 | 25.0 | 5375 | 1.3771 | 0.2667 |
| 1.3916 | 26.0 | 5590 | 1.3764 | 0.2667 |
| 1.3383 | 27.0 | 5805 | 1.3759 | 0.2667 |
| 1.3507 | 28.0 | 6020 | 1.3753 | 0.2889 |
| 1.3823 | 29.0 | 6235 | 1.3748 | 0.2889 |
| 1.3489 | 30.0 | 6450 | 1.3743 | 0.2889 |
| 1.3905 | 31.0 | 6665 | 1.3738 | 0.2889 |
| 1.3646 | 32.0 | 6880 | 1.3734 | 0.2889 |
| 1.394 | 33.0 | 7095 | 1.3730 | 0.2889 |
| 1.3256 | 34.0 | 7310 | 1.3726 | 0.2889 |
| 1.342 | 35.0 | 7525 | 1.3723 | 0.2889 |
| 1.3277 | 36.0 | 7740 | 1.3720 | 0.2889 |
| 1.3815 | 37.0 | 7955 | 1.3717 | 0.2889 |
| 1.3516 | 38.0 | 8170 | 1.3714 | 0.2889 |
| 1.3573 | 39.0 | 8385 | 1.3712 | 0.2889 |
| 1.3764 | 40.0 | 8600 | 1.3710 | 0.2889 |
| 1.3508 | 41.0 | 8815 | 1.3708 | 0.2889 |
| 1.4032 | 42.0 | 9030 | 1.3707 | 0.2889 |
| 1.3548 | 43.0 | 9245 | 1.3705 | 0.2889 |
| 1.3623 | 44.0 | 9460 | 1.3704 | 0.2889 |
| 1.3744 | 45.0 | 9675 | 1.3704 | 0.2889 |
| 1.3298 | 46.0 | 9890 | 1.3703 | 0.2889 |
| 1.352 | 47.0 | 10105 | 1.3703 | 0.2889 |
| 1.363 | 48.0 | 10320 | 1.3702 | 0.2889 |
| 1.3844 | 49.0 | 10535 | 1.3702 | 0.2889 |
| 1.3587 | 50.0 | 10750 | 1.3702 | 0.2889 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
londe33/hair_v02
|
londe33
| 2023-12-24T17:36:19Z | 428 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T17:36:07Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: hair_v02
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8399999737739563
---
# hair_v02
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
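As a quick usage sketch, the model can be loaded with the standard `transformers` image-classification pipeline (the image path below is a placeholder):
```python
from transformers import pipeline

# Minimal usage sketch; "example.jpg" is a placeholder image path.
classifier = pipeline("image-classification", model="londe33/hair_v02")
predictions = classifier("example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```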
## Example Images
#### Brown hair

#### Red hair

#### Black hair

#### Blond hair

|
pocper1/bert_model-1
|
pocper1
| 2023-12-24T17:34:38Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:ckiplab/bert-base-chinese",
"base_model:finetune:ckiplab/bert-base-chinese",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-24T16:37:54Z |
---
license: gpl-3.0
base_model: ckiplab/bert-base-chinese
tags:
- generated_from_trainer
model-index:
- name: bert_model-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_model-1
This model is a fine-tuned version of [ckiplab/bert-base-chinese](https://huggingface.co/ckiplab/bert-base-chinese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4135
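An evaluation loss of 1.4135 corresponds to a (pseudo-)perplexity of roughly e^1.4135 ≈ 4.1. Since this is a fill-mask model, it can be queried with the standard pipeline (a minimal sketch; the example sentence is illustrative):
```python
from transformers import pipeline

# Minimal usage sketch for the fill-mask task; the input sentence
# is an illustrative example, not from the training data.
fill_mask = pipeline("fill-mask", model="pocper1/bert_model-1")
results = fill_mask("今天天氣真[MASK]。")
for r in results:
    print(r["token_str"], r["score"])
```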
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3994 | 0.01 | 100 | 1.6845 |
| 1.8101 | 0.03 | 200 | 1.6963 |
| 1.7742 | 0.04 | 300 | 1.6679 |
| 1.8425 | 0.05 | 400 | 1.6657 |
| 1.8452 | 0.06 | 500 | 1.6369 |
| 1.8109 | 0.08 | 600 | 1.6471 |
| 1.8469 | 0.09 | 700 | 1.6350 |
| 1.7709 | 0.1 | 800 | 1.6302 |
| 1.7848 | 0.12 | 900 | 1.6346 |
| 1.7955 | 0.13 | 1000 | 1.6345 |
| 1.79 | 0.14 | 1100 | 1.6356 |
| 1.7655 | 0.16 | 1200 | 1.6116 |
| 1.7826 | 0.17 | 1300 | 1.6248 |
| 1.7651 | 0.18 | 1400 | 1.6262 |
| 1.7639 | 0.19 | 1500 | 1.6078 |
| 1.7743 | 0.21 | 1600 | 1.6105 |
| 1.7672 | 0.22 | 1700 | 1.5910 |
| 1.7054 | 0.23 | 1800 | 1.6060 |
| 1.6777 | 0.25 | 1900 | 1.6253 |
| 1.748 | 0.26 | 2000 | 1.5970 |
| 1.7503 | 0.27 | 2100 | 1.5893 |
| 1.7329 | 0.29 | 2200 | 1.5883 |
| 1.6826 | 0.3 | 2300 | 1.5781 |
| 1.7237 | 0.31 | 2400 | 1.5716 |
| 1.7358 | 0.32 | 2500 | 1.5671 |
| 1.7093 | 0.34 | 2600 | 1.5689 |
| 1.6771 | 0.35 | 2700 | 1.5654 |
| 1.6924 | 0.36 | 2800 | 1.5729 |
| 1.6768 | 0.38 | 2900 | 1.5545 |
| 1.7158 | 0.39 | 3000 | 1.5471 |
| 1.6808 | 0.4 | 3100 | 1.5415 |
| 1.6547 | 0.42 | 3200 | 1.5444 |
| 1.6557 | 0.43 | 3300 | 1.5400 |
| 1.6491 | 0.44 | 3400 | 1.5358 |
| 1.6757 | 0.45 | 3500 | 1.5244 |
| 1.6473 | 0.47 | 3600 | 1.5268 |
| 1.5987 | 0.48 | 3700 | 1.5201 |
| 1.6386 | 0.49 | 3800 | 1.5121 |
| 1.6568 | 0.51 | 3900 | 1.5004 |
| 1.6454 | 0.52 | 4000 | 1.4895 |
| 1.6175 | 0.53 | 4100 | 1.4974 |
| 1.6036 | 0.55 | 4200 | 1.4964 |
| 1.5785 | 0.56 | 4300 | 1.4882 |
| 1.6009 | 0.57 | 4400 | 1.4858 |
| 1.5723 | 0.58 | 4500 | 1.4755 |
| 1.6133 | 0.6 | 4600 | 1.4751 |
| 1.5683 | 0.61 | 4700 | 1.4692 |
| 1.5773 | 0.62 | 4800 | 1.4677 |
| 1.6005 | 0.64 | 4900 | 1.4645 |
| 1.5812 | 0.65 | 5000 | 1.4596 |
| 1.577 | 0.66 | 5100 | 1.4506 |
| 1.591 | 0.68 | 5200 | 1.4507 |
| 1.5609 | 0.69 | 5300 | 1.4474 |
| 1.5437 | 0.7 | 5400 | 1.4441 |
| 1.5535 | 0.71 | 5500 | 1.4430 |
| 1.5882 | 0.73 | 5600 | 1.4398 |
| 1.5731 | 0.74 | 5700 | 1.4328 |
| 1.5511 | 0.75 | 5800 | 1.4280 |
| 1.5455 | 0.77 | 5900 | 1.4358 |
| 1.5194 | 0.78 | 6000 | 1.4321 |
| 1.5524 | 0.79 | 6100 | 1.4207 |
| 1.5406 | 0.81 | 6200 | 1.4215 |
| 1.4811 | 0.82 | 6300 | 1.4293 |
| 1.5117 | 0.83 | 6400 | 1.4282 |
| 1.5197 | 0.84 | 6500 | 1.4109 |
| 1.558 | 0.86 | 6600 | 1.4241 |
| 1.5277 | 0.87 | 6700 | 1.4116 |
| 1.5346 | 0.88 | 6800 | 1.4190 |
| 1.4974 | 0.9 | 6900 | 1.4105 |
| 1.5345 | 0.91 | 7000 | 1.4163 |
| 1.5578 | 0.92 | 7100 | 1.4099 |
| 1.496 | 0.94 | 7200 | 1.4120 |
| 1.5192 | 0.95 | 7300 | 1.4073 |
| 1.456 | 0.96 | 7400 | 1.4105 |
| 1.4821 | 0.97 | 7500 | 1.4175 |
| 1.5331 | 0.99 | 7600 | 1.4135 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
iloncka/fastvit_t8.apple_in1k_ep_20
|
iloncka
| 2023-12-24T17:27:47Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-12-24T17:24:17Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
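Since the repo is tagged `fastai`, the model can presumably be loaded with the `huggingface_hub` fastai integration (a minimal sketch; usage is not documented by the author):
```python
from huggingface_hub import from_pretrained_fastai

# Minimal loading sketch based on the `fastai` tag; not documented by the author.
learner = from_pretrained_fastai("iloncka/fastvit_t8.apple_in1k_ep_20")
```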
|
hkivancoral/hushem_40x_deit_base_rms_00001_fold2
|
hkivancoral
| 2023-12-24T17:25:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T16:46:59Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7777777777777778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2145
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0124 | 1.0 | 215 | 0.9769 | 0.7556 |
| 0.0003 | 2.0 | 430 | 1.1164 | 0.7556 |
| 0.0001 | 3.0 | 645 | 1.2999 | 0.7556 |
| 0.0 | 4.0 | 860 | 1.4171 | 0.7556 |
| 0.0 | 5.0 | 1075 | 1.5668 | 0.7778 |
| 0.0 | 6.0 | 1290 | 1.6850 | 0.7778 |
| 0.0 | 7.0 | 1505 | 1.8146 | 0.7778 |
| 0.0 | 8.0 | 1720 | 1.9589 | 0.7778 |
| 0.0 | 9.0 | 1935 | 2.1064 | 0.8 |
| 0.0 | 10.0 | 2150 | 2.2093 | 0.8 |
| 0.0 | 11.0 | 2365 | 2.2933 | 0.8 |
| 0.0 | 12.0 | 2580 | 2.3766 | 0.8 |
| 0.0 | 13.0 | 2795 | 2.4083 | 0.7778 |
| 0.0 | 14.0 | 3010 | 2.4352 | 0.7778 |
| 0.0 | 15.0 | 3225 | 2.4429 | 0.7778 |
| 0.0 | 16.0 | 3440 | 2.4405 | 0.7778 |
| 0.0 | 17.0 | 3655 | 2.4464 | 0.7778 |
| 0.0 | 18.0 | 3870 | 2.4337 | 0.7778 |
| 0.0 | 19.0 | 4085 | 2.4439 | 0.7778 |
| 0.0 | 20.0 | 4300 | 2.4205 | 0.7778 |
| 0.0 | 21.0 | 4515 | 2.4211 | 0.7778 |
| 0.0 | 22.0 | 4730 | 2.4042 | 0.7778 |
| 0.0 | 23.0 | 4945 | 2.3825 | 0.7778 |
| 0.0 | 24.0 | 5160 | 2.3776 | 0.7778 |
| 0.0 | 25.0 | 5375 | 2.3705 | 0.7778 |
| 0.0 | 26.0 | 5590 | 2.3563 | 0.7778 |
| 0.0 | 27.0 | 5805 | 2.3321 | 0.7778 |
| 0.0 | 28.0 | 6020 | 2.3284 | 0.7778 |
| 0.0 | 29.0 | 6235 | 2.3256 | 0.7778 |
| 0.0 | 30.0 | 6450 | 2.3054 | 0.7778 |
| 0.0 | 31.0 | 6665 | 2.2910 | 0.7778 |
| 0.0 | 32.0 | 6880 | 2.2963 | 0.7778 |
| 0.0 | 33.0 | 7095 | 2.2902 | 0.7778 |
| 0.0 | 34.0 | 7310 | 2.2745 | 0.7778 |
| 0.0 | 35.0 | 7525 | 2.2617 | 0.7778 |
| 0.0 | 36.0 | 7740 | 2.2546 | 0.7778 |
| 0.0 | 37.0 | 7955 | 2.2630 | 0.7778 |
| 0.0 | 38.0 | 8170 | 2.2430 | 0.7778 |
| 0.0 | 39.0 | 8385 | 2.2389 | 0.7778 |
| 0.0 | 40.0 | 8600 | 2.2433 | 0.7778 |
| 0.0 | 41.0 | 8815 | 2.2306 | 0.7778 |
| 0.0 | 42.0 | 9030 | 2.2253 | 0.7778 |
| 0.0 | 43.0 | 9245 | 2.2215 | 0.7778 |
| 0.0 | 44.0 | 9460 | 2.2183 | 0.7778 |
| 0.0 | 45.0 | 9675 | 2.2187 | 0.7778 |
| 0.0 | 46.0 | 9890 | 2.2190 | 0.7778 |
| 0.0 | 47.0 | 10105 | 2.2156 | 0.7778 |
| 0.0 | 48.0 | 10320 | 2.2160 | 0.7778 |
| 0.0 | 49.0 | 10535 | 2.2147 | 0.7778 |
| 0.0 | 50.0 | 10750 | 2.2145 | 0.7778 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
TheBloke/LMCocktail-10.7B-v1-AWQ
|
TheBloke
| 2023-12-24T17:22:50Z | 43 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2311.13534",
"base_model:Yhyu13/LMCocktail-10.7B-v1",
"base_model:quantized:Yhyu13/LMCocktail-10.7B-v1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-12-24T16:55:54Z |
---
base_model: Yhyu13/LMCocktail-10.7B-v1
inference: false
license: llama2
model_creator: Yu
model_name: LMCocktail 10.7B v1
model_type: solar
prompt_template: '### System:
{system_message}
### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LMCocktail 10.7B v1 - AWQ
- Model creator: [Yu](https://huggingface.co/Yhyu13)
- Original model: [LMCocktail 10.7B v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
<!-- description start -->
## Description
This repo contains AWQ model files for [Yu's LMCocktail 10.7B v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LMCocktail-10.7B-v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LMCocktail-10.7B-v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LMCocktail-10.7B-v1-GGUF)
* [Yu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: System-User-Assistant
```
### System:
{system_message}
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/LMCocktail-10.7B-v1-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.96 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/LMCocktail-10.7B-v1-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `LMCocktail-10.7B-v1-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/LMCocktail-10.7B-v1-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

# Example system message; replace with your own.
system_message = "You are a helpful assistant."

# Plain (non-f) string, so the placeholders survive until .format() is called.
prompt_template = '''### System:
{system_message}
### User:
{prompt}
### Assistant:
'''

prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/LMCocktail-10.7B-v1-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/LMCocktail-10.7B-v1-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

# Example system message; replace with your own.
system_message = "You are a helpful assistant."
prompt = "Tell me about AI"
prompt_template = f'''### System:
{system_message}
### User:
{prompt}
### Assistant:
'''

client = InferenceClient(endpoint_url)

response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/LMCocktail-10.7B-v1-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''### System:
{system_message}
### User:
{prompt}
### Assistant:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yu's LMCocktail 10.7B v1
# LM-cocktail 10.7B v1
This is a 50%-50% merge of the SOLAR model and meow:
https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0
https://huggingface.co/rishiraj/meow
which ranked #1 and #2 among models <13B on the https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard as of 2023/12/20.
# Alpaca Eval
I am thrilled to announce that ChatGPT has ranked LMCocktail 10.7B as the second-best model, behind only GPT-4, on AlpacaEval in my local run. You can also check the leaderboard at [./alpaca_eval/chatgpt_fn_--SOLAR-10-7B-LMCocktail/](./alpaca_eval/chatgpt_fn_--SOLAR-10-7B-LMCocktail/)
```
                             win_rate  standard_error  n_total  avg_length
gpt4                            73.79            1.54      805        1365
SOLAR-10.7B-LMCocktail(new)     73.45            1.56      804        1203
claude                          70.37            1.60      805        1082
chatgpt                         66.09            1.66      805         811
wizardlm-13b                    65.16            1.67      805         985
vicuna-13b                      64.10            1.69      805        1037
guanaco-65b                     62.36            1.71      805        1249
oasst-rlhf-llama-33b            62.05            1.71      805        1079
alpaca-farm-ppo-human           60.25            1.72      805         803
falcon-40b-instruct             56.52            1.74      805         662
text_davinci_003                50.00            0.00      805         307
alpaca-7b                       45.22            1.74      805         396
text_davinci_001                28.07            1.56      805         296
```
# Code
LM-cocktail is a novel technique for merging multiple models: https://arxiv.org/abs/2311.13534
The code is maintained in this repo: https://github.com/FlagOpen/FlagEmbedding.git
Merging scripts are available under the [./scripts](./scripts) folder.
# Result
The SOLAR model is the first model <30B in my testing that can answer this question:
```
What will AI be like in the year 1010 A.D?
```
without hallucinating that 1010 A.D. is a future time (as other Llama 2 models do).
Larger models, such as Yi-34B, can also answer this paradoxical question correctly, simply because they are large enough.
### SOLAR 10.7B output

### LMCocktail 10.7B output1

### LMCocktail 10.7B output2

|
TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF
|
TheBloke
| 2023-12-24T17:17:17Z | 92 | 3 |
transformers
|
[
"transformers",
"gguf",
"mixtral",
"text-generation",
"dataset:databricks/databricks-dolly-15k",
"base_model:Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K",
"base_model:quantized:Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2023-12-24T17:05:01Z |
---
base_model: Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K
datasets:
- databricks/databricks-dolly-15k
inference: false
license: apache-2.0
model_creator: Brillibits
model_name: Instruct Mixtral 8X7B V0.1 Dolly15K
model_type: mixtral
pipeline_tag: text-generation
prompt_template: '{prompt}
Output:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Instruct Mixtral 8X7B V0.1 Dolly15K - GGUF
- Model creator: [Brillibits](https://huggingface.co/Brillibits)
- Original model: [Instruct Mixtral 8X7B V0.1 Dolly15K](https://huggingface.co/Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Brillibits's Instruct Mixtral 8X7B V0.1 Dolly15K](https://huggingface.co/Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF)
* [Brillibits's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Output
```
{prompt}
Output:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [instruct_mixtral-8x7b-v0.1_dolly15k.Q2_K.gguf](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/instruct_mixtral-8x7b-v0.1_dolly15k.Q2_K.gguf) | Q2_K | 2 | 15.64 GB| 18.14 GB | smallest, significant quality loss - not recommended for most purposes |
| [instruct_mixtral-8x7b-v0.1_dolly15k.Q3_K_M.gguf](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/instruct_mixtral-8x7b-v0.1_dolly15k.Q3_K_M.gguf) | Q3_K_M | 3 | 20.36 GB| 22.86 GB | very small, high quality loss |
| [instruct_mixtral-8x7b-v0.1_dolly15k.Q4_0.gguf](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/instruct_mixtral-8x7b-v0.1_dolly15k.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [instruct_mixtral-8x7b-v0.1_dolly15k.Q4_K_M.gguf](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/instruct_mixtral-8x7b-v0.1_dolly15k.Q4_K_M.gguf) | Q4_K_M | 4 | 26.44 GB| 28.94 GB | medium, balanced quality - recommended |
| [instruct_mixtral-8x7b-v0.1_dolly15k.Q5_0.gguf](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/instruct_mixtral-8x7b-v0.1_dolly15k.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [instruct_mixtral-8x7b-v0.1_dolly15k.Q5_K_M.gguf](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/instruct_mixtral-8x7b-v0.1_dolly15k.Q5_K_M.gguf) | Q5_K_M | 5 | 32.23 GB| 34.73 GB | large, very low quality loss - recommended |
| [instruct_mixtral-8x7b-v0.1_dolly15k.Q6_K.gguf](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/instruct_mixtral-8x7b-v0.1_dolly15k.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss |
| [instruct_mixtral-8x7b-v0.1_dolly15k.Q8_0.gguf](https://huggingface.co/TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/instruct_mixtral-8x7b-v0.1_dolly15k.Q8_0.gguf) | Q8_0 | 8 | 49.63 GB| 52.13 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF and below it, a specific filename to download, such as: instruct_mixtral-8x7b-v0.1_dolly15k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF instruct_mixtral-8x7b-v0.1_dolly15k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF instruct_mixtral-8x7b-v0.1_dolly15k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m instruct_mixtral-8x7b-v0.1_dolly15k.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}\n\nOutput:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./instruct_mixtral-8x7b-v0.1_dolly15k.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}\n\nOutput:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./instruct_mixtral-8x7b-v0.1_dolly15k.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
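As a minimal sketch, the llama-cpp-python route through LangChain looks roughly like this (class path per the LangChain docs linked above; the model path and parameters are placeholders):
```python
from langchain.llms import LlamaCpp

# Minimal sketch; model_path is a placeholder for the downloaded GGUF file.
llm = LlamaCpp(
    model_path="./instruct_mixtral-8x7b-v0.1_dolly15k.Q4_K_M.gguf",
    n_ctx=32768,       # context length, as in the llama.cpp example above
    n_gpu_layers=35,   # layers to offload to GPU; set 0 for CPU-only
    temperature=0.7,
)
print(llm("Tell me about AI\n\nOutput:"))
```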
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Brillibits's Instruct Mixtral 8X7B V0.1 Dolly15K
# Instruct_Mixtral-8x7B-v0.1_Dolly15K
Fine-tuned from Mixtral-8x7B-v0.1 on the Dolly15k dataset, split 85% training / 14.9% validation / 0.1% test. Trained for 1.0 epoch using QLoRA with a 1024-token context window.
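The training code is not published in this card, but a typical QLoRA setup with `peft` and `bitsandbytes` looks roughly like the sketch below; all values are illustrative, not the ones used for this model:
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# Illustrative QLoRA configuration; the actual hyperparameters used for
# Instruct_Mixtral-8x7B-v0.1_Dolly15K are not published in this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,               # 4-bit base weights, the "Q" in QLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
lora_config = LoraConfig(
    r=16,                            # illustrative LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```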
# Model Details
* **Trained by**: trained by [Brillibits](https://www.youtube.com/@Brillibits).
* **Model type:** **Instruct_Mixtral-8x7B-v0.1_Dolly15K** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Instruct_Mixtral-8x7B-v0.1_Dolly15K**: apache-2.0 license
# Prompting
## Prompt Template With Context
```
Write a 10-line poem about a given topic
Input:
The topic is about racecars
Output:
```
## Prompt Template Without Context
```
Who was the second president of the United States?
Output:
```
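In code, assembling prompts in the two template formats above is plain string formatting (a minimal sketch):
```python
# Minimal sketch of assembling prompts in the two template formats above.
instruction = "Write a 10-line poem about a given topic"
context = "The topic is about racecars"

with_context = f"{instruction}\nInput:\n{context}\nOutput:"
without_context = "Who was the second president of the United States?\nOutput:"
```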
## Professional Assistance
This model and others like it are great, but LLMs hold the most promise when they are applied to custom data to automate a wide variety of tasks.
If you have a dataset you think could be applied to automate some tasks and you are looking for professional assistance, contact me [here](mailto:[email protected]).
<!-- original-model-card end -->
|
hkivancoral/hushem_40x_deit_small_adamax_001_fold3
|
hkivancoral
| 2023-12-24T17:16:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T16:54:11Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_adamax_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.813953488372093
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0018
- Accuracy: 0.8140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2113 | 1.0 | 217 | 1.0639 | 0.7209 |
| 0.3075 | 2.0 | 434 | 0.6999 | 0.7442 |
| 0.0797 | 3.0 | 651 | 1.4112 | 0.7209 |
| 0.0613 | 4.0 | 868 | 0.8895 | 0.8605 |
| 0.0448 | 5.0 | 1085 | 0.8165 | 0.8140 |
| 0.0133 | 6.0 | 1302 | 1.2281 | 0.7907 |
| 0.0099 | 7.0 | 1519 | 1.6935 | 0.7907 |
| 0.0195 | 8.0 | 1736 | 0.9261 | 0.8837 |
| 0.0441 | 9.0 | 1953 | 0.6136 | 0.8605 |
| 0.0408 | 10.0 | 2170 | 1.0937 | 0.8605 |
| 0.0001 | 11.0 | 2387 | 1.3536 | 0.8372 |
| 0.0014 | 12.0 | 2604 | 1.5056 | 0.8372 |
| 0.0152 | 13.0 | 2821 | 1.3542 | 0.8140 |
| 0.0011 | 14.0 | 3038 | 1.1435 | 0.8140 |
| 0.0006 | 15.0 | 3255 | 1.7874 | 0.7907 |
| 0.0244 | 16.0 | 3472 | 1.5609 | 0.8140 |
| 0.0 | 17.0 | 3689 | 0.9143 | 0.9070 |
| 0.0 | 18.0 | 3906 | 1.3119 | 0.8140 |
| 0.0 | 19.0 | 4123 | 1.5264 | 0.8372 |
| 0.0024 | 20.0 | 4340 | 1.6055 | 0.8140 |
| 0.0 | 21.0 | 4557 | 1.7071 | 0.8140 |
| 0.0 | 22.0 | 4774 | 1.6943 | 0.8140 |
| 0.0 | 23.0 | 4991 | 1.6871 | 0.8140 |
| 0.0 | 24.0 | 5208 | 1.6854 | 0.8140 |
| 0.0 | 25.0 | 5425 | 1.6881 | 0.8140 |
| 0.0 | 26.0 | 5642 | 1.6930 | 0.8140 |
| 0.0 | 27.0 | 5859 | 1.6999 | 0.8140 |
| 0.0 | 28.0 | 6076 | 1.7095 | 0.8140 |
| 0.0 | 29.0 | 6293 | 1.7201 | 0.8140 |
| 0.0 | 30.0 | 6510 | 1.7321 | 0.8140 |
| 0.0 | 31.0 | 6727 | 1.7453 | 0.8140 |
| 0.0 | 32.0 | 6944 | 1.7591 | 0.8140 |
| 0.0 | 33.0 | 7161 | 1.7739 | 0.8140 |
| 0.0 | 34.0 | 7378 | 1.7893 | 0.8140 |
| 0.0 | 35.0 | 7595 | 1.8052 | 0.8140 |
| 0.0 | 36.0 | 7812 | 1.8215 | 0.8140 |
| 0.0 | 37.0 | 8029 | 1.8380 | 0.8140 |
| 0.0 | 38.0 | 8246 | 1.8542 | 0.8140 |
| 0.0 | 39.0 | 8463 | 1.8709 | 0.8140 |
| 0.0 | 40.0 | 8680 | 1.8874 | 0.8140 |
| 0.0 | 41.0 | 8897 | 1.9038 | 0.8140 |
| 0.0 | 42.0 | 9114 | 1.9194 | 0.8140 |
| 0.0 | 43.0 | 9331 | 1.9350 | 0.8140 |
| 0.0 | 44.0 | 9548 | 1.9494 | 0.8140 |
| 0.0 | 45.0 | 9765 | 1.9631 | 0.8140 |
| 0.0 | 46.0 | 9982 | 1.9753 | 0.8140 |
| 0.0 | 47.0 | 10199 | 1.9864 | 0.8140 |
| 0.0 | 48.0 | 10416 | 1.9949 | 0.8140 |
| 0.0 | 49.0 | 10633 | 2.0003 | 0.8140 |
| 0.0 | 50.0 | 10850 | 2.0018 | 0.8140 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
iloncka/tiny_vit_21m_224.in1k_ep_20
|
iloncka
| 2023-12-24T17:12:13Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-12-24T17:07:25Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v5
|
ThuyNT03
| 2023-12-24T17:04:43Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v4",
"base_model:finetune:ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v4",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-24T16:21:27Z |
---
license: mit
base_model: ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v4
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_SPAOL_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_SPAOL_v5
This model is a fine-tuned version of [ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v4](https://huggingface.co/ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v4) on the None dataset.
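The model can be loaded with the standard text2text-generation pipeline (a minimal sketch; the input text is a placeholder):
```python
from transformers import pipeline

# Minimal usage sketch; the input text is a placeholder, not from the
# (unspecified) training data.
generator = pipeline("text2text-generation", model="ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v5")
output = generator("Điện thoại A tốt hơn điện thoại B.")
print(output)
```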
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
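For reference, the configuration above corresponds roughly to a `Seq2SeqTrainingArguments` setup like the following (a sketch reconstructed from the listed values, not the authors' actual training script; `output_dir` is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

# Values mirror the hyperparameter list above; everything else is left at defaults
args = Seq2SeqTrainingArguments(
    output_dir="KLTN_COQE_viT5_total_SPAOL_v5",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed precision
)
```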
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
ThuyNT03/KLTN_COQE_viT5_total_POASL_v5
|
ThuyNT03
| 2023-12-24T16:56:53Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/KLTN_COQE_viT5_total_POASL_v4",
"base_model:finetune:ThuyNT03/KLTN_COQE_viT5_total_POASL_v4",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-24T16:15:28Z |
---
license: mit
base_model: ThuyNT03/KLTN_COQE_viT5_total_POASL_v4
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_POASL_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_POASL_v5
This model is a fine-tuned version of [ThuyNT03/KLTN_COQE_viT5_total_POASL_v4](https://huggingface.co/ThuyNT03/KLTN_COQE_viT5_total_POASL_v4) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
ai-simonsk13/MentalHealth-Openchat-7b-finetune
|
ai-simonsk13
| 2023-12-24T16:54:06Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openchat/openchat-3.5-1210",
"base_model:adapter:openchat/openchat-3.5-1210",
"region:us"
] | null | 2023-12-24T16:54:00Z |
---
library_name: peft
base_model: openchat/openchat-3.5-1210
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
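Until the authors provide their own snippet, loading a PEFT adapter on its listed base model generally looks like this (a minimal sketch; the adapter repo id is this card's, and the prompt and generation settings are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openchat/openchat-3.5-1210"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned LoRA adapter from this repository
model = PeftModel.from_pretrained(base_model, "ai-simonsk13/MentalHealth-Openchat-7b-finetune")

inputs = tokenizer("Hello, how are you feeling today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```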
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
hkivancoral/hushem_40x_deit_small_adamax_001_fold2
|
hkivancoral
| 2023-12-24T16:54:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T16:32:18Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_adamax_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2920
- Accuracy: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
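Expressed as `TrainingArguments`, the list above maps roughly to the sketch below (reconstructed from the listed values; `output_dir` is a placeholder and other settings are left at their defaults):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="hushem_40x_deit_small_adamax_001_fold2",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```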
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1775 | 1.0 | 215 | 1.6855 | 0.7111 |
| 0.1537 | 2.0 | 430 | 1.3524 | 0.7111 |
| 0.0687 | 3.0 | 645 | 2.1272 | 0.7333 |
| 0.0127 | 4.0 | 860 | 1.6443 | 0.7778 |
| 0.1338 | 5.0 | 1075 | 1.6931 | 0.7111 |
| 0.0106 | 6.0 | 1290 | 2.4757 | 0.6667 |
| 0.049 | 7.0 | 1505 | 2.6204 | 0.6889 |
| 0.0012 | 8.0 | 1720 | 1.8192 | 0.7333 |
| 0.0005 | 9.0 | 1935 | 1.7811 | 0.7556 |
| 0.0005 | 10.0 | 2150 | 2.2694 | 0.6889 |
| 0.0153 | 11.0 | 2365 | 1.6459 | 0.7333 |
| 0.0005 | 12.0 | 2580 | 1.8151 | 0.7778 |
| 0.0072 | 13.0 | 2795 | 1.9954 | 0.7556 |
| 0.0 | 14.0 | 3010 | 2.3490 | 0.7778 |
| 0.0073 | 15.0 | 3225 | 2.3310 | 0.7556 |
| 0.0002 | 16.0 | 3440 | 2.4489 | 0.6667 |
| 0.0001 | 17.0 | 3655 | 2.8003 | 0.6222 |
| 0.0 | 18.0 | 3870 | 2.6717 | 0.7333 |
| 0.0 | 19.0 | 4085 | 2.6848 | 0.7333 |
| 0.0 | 20.0 | 4300 | 2.6999 | 0.7333 |
| 0.0 | 21.0 | 4515 | 2.7166 | 0.7333 |
| 0.0 | 22.0 | 4730 | 2.7339 | 0.7333 |
| 0.0 | 23.0 | 4945 | 2.7519 | 0.7333 |
| 0.0 | 24.0 | 5160 | 2.7709 | 0.7333 |
| 0.0 | 25.0 | 5375 | 2.7907 | 0.7333 |
| 0.0 | 26.0 | 5590 | 2.8115 | 0.7333 |
| 0.0 | 27.0 | 5805 | 2.8327 | 0.7333 |
| 0.0 | 28.0 | 6020 | 2.8548 | 0.7333 |
| 0.0 | 29.0 | 6235 | 2.8773 | 0.7333 |
| 0.0 | 30.0 | 6450 | 2.9001 | 0.7333 |
| 0.0 | 31.0 | 6665 | 2.9234 | 0.7333 |
| 0.0 | 32.0 | 6880 | 2.9473 | 0.7333 |
| 0.0 | 33.0 | 7095 | 2.9712 | 0.7333 |
| 0.0 | 34.0 | 7310 | 2.9955 | 0.7333 |
| 0.0 | 35.0 | 7525 | 3.0198 | 0.7333 |
| 0.0 | 36.0 | 7740 | 3.0443 | 0.7333 |
| 0.0 | 37.0 | 7955 | 3.0682 | 0.7333 |
| 0.0 | 38.0 | 8170 | 3.0917 | 0.7333 |
| 0.0 | 39.0 | 8385 | 3.1162 | 0.7333 |
| 0.0 | 40.0 | 8600 | 3.1397 | 0.7333 |
| 0.0 | 41.0 | 8815 | 3.1619 | 0.7333 |
| 0.0 | 42.0 | 9030 | 3.1849 | 0.7333 |
| 0.0 | 43.0 | 9245 | 3.2057 | 0.7333 |
| 0.0 | 44.0 | 9460 | 3.2253 | 0.7333 |
| 0.0 | 45.0 | 9675 | 3.2434 | 0.7333 |
| 0.0 | 46.0 | 9890 | 3.2592 | 0.7333 |
| 0.0 | 47.0 | 10105 | 3.2727 | 0.7333 |
| 0.0 | 48.0 | 10320 | 3.2833 | 0.7333 |
| 0.0 | 49.0 | 10535 | 3.2902 | 0.7333 |
| 0.0 | 50.0 | 10750 | 3.2920 | 0.7333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Anwaarma/S02
|
Anwaarma
| 2023-12-24T16:52:12Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Anwaarma/Merged-Server-praj",
"base_model:finetune:Anwaarma/Merged-Server-praj",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T16:42:29Z |
---
base_model: Anwaarma/Merged-Server-praj
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: S02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S02
This model is a fine-tuned version of [Anwaarma/Merged-Server-praj](https://huggingface.co/Anwaarma/Merged-Server-praj) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5643
- Accuracy: 0.82
- F1: 0.9011
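The card ships no usage code; for a text-classification checkpoint like this one, inference typically looks like the sketch below (the repo id is this card's; the input sentence is a placeholder and label names depend on the checkpoint's config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Anwaarma/S02")
print(clf("Example input text"))  # e.g. [{'label': ..., 'score': ...}]
```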
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.0 | 50 | 0.5790 | 0.6 | 0.5992 |
| No log | 0.01 | 100 | 0.5691 | 0.65 | 0.6505 |
| No log | 0.01 | 150 | 0.5678 | 0.65 | 0.6505 |
| No log | 0.01 | 200 | 0.5621 | 0.68 | 0.6773 |
| No log | 0.02 | 250 | 0.5666 | 0.63 | 0.6303 |
| No log | 0.02 | 300 | 0.5721 | 0.65 | 0.6463 |
| No log | 0.02 | 350 | 0.5533 | 0.63 | 0.6260 |
| No log | 0.03 | 400 | 0.5614 | 0.62 | 0.6105 |
| No log | 0.03 | 450 | 0.5756 | 0.62 | 0.6181 |
| 0.5985 | 0.03 | 500 | 0.5666 | 0.6 | 0.5947 |
| 0.5985 | 0.04 | 550 | 0.5613 | 0.64 | 0.6406 |
| 0.5985 | 0.04 | 600 | 0.5541 | 0.63 | 0.6306 |
| 0.5985 | 0.04 | 650 | 0.5571 | 0.62 | 0.6192 |
| 0.5985 | 0.05 | 700 | 0.5536 | 0.62 | 0.6192 |
| 0.5985 | 0.05 | 750 | 0.5614 | 0.63 | 0.6306 |
| 0.5985 | 0.05 | 800 | 0.5667 | 0.63 | 0.6297 |
| 0.5985 | 0.06 | 850 | 0.5466 | 0.66 | 0.6600 |
| 0.5985 | 0.06 | 900 | 0.5532 | 0.66 | 0.6593 |
| 0.5985 | 0.06 | 950 | 0.5482 | 0.67 | 0.6630 |
| 0.5855 | 0.07 | 1000 | 0.5837 | 0.63 | 0.6220 |
| 0.5855 | 0.07 | 1050 | 0.5368 | 0.67 | 0.6705 |
| 0.5855 | 0.07 | 1100 | 0.5793 | 0.62 | 0.6167 |
| 0.5855 | 0.08 | 1150 | 0.5694 | 0.63 | 0.6276 |
| 0.5855 | 0.08 | 1200 | 0.5520 | 0.63 | 0.6306 |
| 0.5855 | 0.09 | 1250 | 0.5572 | 0.66 | 0.6593 |
| 0.5855 | 0.09 | 1300 | 0.5706 | 0.62 | 0.6150 |
| 0.5855 | 0.09 | 1350 | 0.5694 | 0.66 | 0.6593 |
| 0.5855 | 0.1 | 1400 | 0.5559 | 0.65 | 0.6497 |
| 0.5855 | 0.1 | 1450 | 0.5515 | 0.67 | 0.6705 |
| 0.5777 | 0.1 | 1500 | 0.5447 | 0.64 | 0.6393 |
| 0.5777 | 0.11 | 1550 | 0.5453 | 0.65 | 0.6502 |
| 0.5777 | 0.11 | 1600 | 0.5575 | 0.64 | 0.6400 |
| 0.5777 | 0.11 | 1650 | 0.5498 | 0.66 | 0.6584 |
| 0.5777 | 0.12 | 1700 | 0.5620 | 0.66 | 0.6604 |
| 0.5777 | 0.12 | 1750 | 0.5734 | 0.67 | 0.6702 |
| 0.5777 | 0.12 | 1800 | 0.5561 | 0.66 | 0.6593 |
| 0.5777 | 0.13 | 1850 | 0.5376 | 0.67 | 0.6649 |
| 0.5777 | 0.13 | 1900 | 0.5652 | 0.65 | 0.6505 |
| 0.5777 | 0.13 | 1950 | 0.5414 | 0.67 | 0.6689 |
| 0.575 | 0.14 | 2000 | 0.5340 | 0.67 | 0.6665 |
| 0.575 | 0.14 | 2050 | 0.5393 | 0.68 | 0.6794 |
| 0.575 | 0.14 | 2100 | 0.5253 | 0.7 | 0.6994 |
| 0.575 | 0.15 | 2150 | 0.5334 | 0.69 | 0.6834 |
| 0.575 | 0.15 | 2200 | 0.5395 | 0.68 | 0.6773 |
| 0.575 | 0.15 | 2250 | 0.5426 | 0.65 | 0.6446 |
| 0.575 | 0.16 | 2300 | 0.5523 | 0.64 | 0.6370 |
| 0.575 | 0.16 | 2350 | 0.5378 | 0.68 | 0.6804 |
| 0.575 | 0.16 | 2400 | 0.5375 | 0.67 | 0.6649 |
| 0.575 | 0.17 | 2450 | 0.5378 | 0.68 | 0.6742 |
| 0.556 | 0.17 | 2500 | 0.5491 | 0.69 | 0.6867 |
| 0.556 | 0.17 | 2550 | 0.5347 | 0.66 | 0.6517 |
| 0.556 | 0.18 | 2600 | 0.5325 | 0.69 | 0.6852 |
| 0.556 | 0.18 | 2650 | 0.5490 | 0.68 | 0.6794 |
| 0.556 | 0.18 | 2700 | 0.5313 | 0.7 | 0.7005 |
| 0.556 | 0.19 | 2750 | 0.5451 | 0.65 | 0.6314 |
| 0.556 | 0.19 | 2800 | 0.5506 | 0.64 | 0.6312 |
| 0.556 | 0.19 | 2850 | 0.5539 | 0.65 | 0.6497 |
| 0.556 | 0.2 | 2900 | 0.5601 | 0.66 | 0.6604 |
| 0.556 | 0.2 | 2950 | 0.5530 | 0.67 | 0.6705 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError1.0_Seed105
|
behzadnet
| 2023-12-24T16:48:22Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-24T16:48:18Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
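The same settings can be expressed with `transformers`' `BitsAndBytesConfig` (a sketch mirroring the values above; the int8 flags listed as `False`/`None` are the defaults and are omitted):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```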
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_RandomError1.0_Seed105
|
behzadnet
| 2023-12-24T16:48:12Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-24T16:48:05Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
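For inference, the quantized base model is typically reloaded with the same config and the adapters attached on top (a sketch under those assumptions, not code from the authors):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapters from this repository
model = PeftModel.from_pretrained(
    base,
    "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_RandomError1.0_Seed105",
)
```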
### Framework versions
- PEFT 0.7.0.dev0
|
hkivancoral/hushem_40x_deit_base_rms_00001_fold1
|
hkivancoral
| 2023-12-24T16:46:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T16:08:29Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1476
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0061 | 1.0 | 215 | 0.5807 | 0.8667 |
| 0.0003 | 2.0 | 430 | 0.6211 | 0.8444 |
| 0.0001 | 3.0 | 645 | 0.8059 | 0.8222 |
| 0.0 | 4.0 | 860 | 0.8142 | 0.8444 |
| 0.0 | 5.0 | 1075 | 0.8755 | 0.8222 |
| 0.0 | 6.0 | 1290 | 0.9063 | 0.8444 |
| 0.0 | 7.0 | 1505 | 0.9620 | 0.8667 |
| 0.0 | 8.0 | 1720 | 0.9896 | 0.8667 |
| 0.0 | 9.0 | 1935 | 1.0818 | 0.8667 |
| 0.0 | 10.0 | 2150 | 1.1238 | 0.8667 |
| 0.0 | 11.0 | 2365 | 1.1782 | 0.8667 |
| 0.0 | 12.0 | 2580 | 1.2105 | 0.8667 |
| 0.0 | 13.0 | 2795 | 1.2229 | 0.8667 |
| 0.0 | 14.0 | 3010 | 1.2497 | 0.8667 |
| 0.0 | 15.0 | 3225 | 1.2395 | 0.8667 |
| 0.0 | 16.0 | 3440 | 1.2297 | 0.8889 |
| 0.0 | 17.0 | 3655 | 1.2382 | 0.8889 |
| 0.0 | 18.0 | 3870 | 1.2316 | 0.8667 |
| 0.0 | 19.0 | 4085 | 1.2222 | 0.8889 |
| 0.0 | 20.0 | 4300 | 1.2098 | 0.8889 |
| 0.0 | 21.0 | 4515 | 1.2108 | 0.8889 |
| 0.0 | 22.0 | 4730 | 1.2160 | 0.8667 |
| 0.0 | 23.0 | 4945 | 1.1914 | 0.8889 |
| 0.0 | 24.0 | 5160 | 1.2067 | 0.8667 |
| 0.0 | 25.0 | 5375 | 1.1881 | 0.8667 |
| 0.0 | 26.0 | 5590 | 1.1754 | 0.8667 |
| 0.0 | 27.0 | 5805 | 1.1838 | 0.8667 |
| 0.0 | 28.0 | 6020 | 1.1945 | 0.8444 |
| 0.0 | 29.0 | 6235 | 1.1919 | 0.8444 |
| 0.0 | 30.0 | 6450 | 1.1709 | 0.8444 |
| 0.0 | 31.0 | 6665 | 1.1710 | 0.8444 |
| 0.0 | 32.0 | 6880 | 1.1725 | 0.8444 |
| 0.0 | 33.0 | 7095 | 1.1648 | 0.8444 |
| 0.0 | 34.0 | 7310 | 1.1652 | 0.8444 |
| 0.0 | 35.0 | 7525 | 1.1685 | 0.8444 |
| 0.0 | 36.0 | 7740 | 1.1632 | 0.8444 |
| 0.0 | 37.0 | 7955 | 1.1596 | 0.8667 |
| 0.0 | 38.0 | 8170 | 1.1545 | 0.8667 |
| 0.0 | 39.0 | 8385 | 1.1576 | 0.8444 |
| 0.0 | 40.0 | 8600 | 1.1585 | 0.8667 |
| 0.0 | 41.0 | 8815 | 1.1448 | 0.8667 |
| 0.0 | 42.0 | 9030 | 1.1428 | 0.8667 |
| 0.0 | 43.0 | 9245 | 1.1526 | 0.8667 |
| 0.0 | 44.0 | 9460 | 1.1466 | 0.8667 |
| 0.0 | 45.0 | 9675 | 1.1454 | 0.8667 |
| 0.0 | 46.0 | 9890 | 1.1467 | 0.8667 |
| 0.0 | 47.0 | 10105 | 1.1498 | 0.8667 |
| 0.0 | 48.0 | 10320 | 1.1458 | 0.8667 |
| 0.0 | 49.0 | 10535 | 1.1472 | 0.8667 |
| 0.0 | 50.0 | 10750 | 1.1476 | 0.8667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
ntc-ai/SDXL-LoRA-slider.glowing-white-eyes
|
ntc-ai
| 2023-12-24T16:45:29Z | 48 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-24T16:45:25Z |
---
language:
- en
thumbnail: "images/evaluate/glowing white eyes...regular eye color/glowing white eyes_17_3.0.png"
widget:
- text: glowing white eyes
output:
url: images/glowing white eyes_17_3.0.png
- text: glowing white eyes
output:
url: images/glowing white eyes_19_3.0.png
- text: glowing white eyes
output:
url: images/glowing white eyes_20_3.0.png
- text: glowing white eyes
output:
url: images/glowing white eyes_21_3.0.png
- text: glowing white eyes
output:
url: images/glowing white eyes_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "glowing white eyes"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - glowing white eyes (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/glowing white eyes_17_-3.0.png" width=256 height=256 /> | <img src="images/glowing white eyes_17_0.0.png" width=256 height=256 /> | <img src="images/glowing white eyes_17_3.0.png" width=256 height=256 /> |
| <img src="images/glowing white eyes_19_-3.0.png" width=256 height=256 /> | <img src="images/glowing white eyes_19_0.0.png" width=256 height=256 /> | <img src="images/glowing white eyes_19_3.0.png" width=256 height=256 /> |
| <img src="images/glowing white eyes_20_-3.0.png" width=256 height=256 /> | <img src="images/glowing white eyes_20_0.0.png" width=256 height=256 /> | <img src="images/glowing white eyes_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
glowing white eyes
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.glowing-white-eyes', weight_name='glowing white eyes.safetensors', adapter_name="glowing white eyes")
# Activate the LoRA
pipe.set_adapters(["glowing white eyes"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, glowing white eyes"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 590 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
TheBloke/typhoon-7B-AWQ
|
TheBloke
| 2023-12-24T16:40:21Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"pretrained",
"th",
"arxiv:2312.13951",
"base_model:scb10x/typhoon-7b",
"base_model:quantized:scb10x/typhoon-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-12-24T16:25:16Z |
---
base_model: scb10x/typhoon-7b
inference: false
language:
- th
library_name: transformers
license: apache-2.0
model_creator: SCB 10X
model_name: Typhoon 7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- pretrained
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Typhoon 7B - AWQ
- Model creator: [SCB 10X](https://huggingface.co/scb10x)
- Original model: [Typhoon 7B](https://huggingface.co/scb10x/typhoon-7b)
<!-- description start -->
## Description
This repo contains AWQ model files for [SCB 10X's Typhoon 7B](https://huggingface.co/scb10x/typhoon-7b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/typhoon-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/typhoon-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/typhoon-7B-GGUF)
* [SCB 10X's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/scb10x/typhoon-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/typhoon-7B-AWQ/tree/main) | 4 | 128 | [All Thai](https://huggingface.co/datasets/pbwt/all-thai/viewer/) | 4096 | 4.20 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/typhoon-7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `typhoon-7B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/typhoon-7B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Use a plain template string (not an f-string) so .format() below can fill in {prompt}
prompt_template='''{prompt}
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/typhoon-7B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/typhoon-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/typhoon-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: SCB 10X's Typhoon 7B
# Typhoon-7B: Thai Large Language Model (Pretrained)
**Typhoon-7B** is a *pretrained* Thai 🇹🇭 large language model with 7 billion parameters, and it is based on Mistral-7B.
**Typhoon-7B** outperforms all open-source Thai language models at the time of writing as evaluated on Thai examination benchmarks, and its instruction-tuned variant achieves the best results in instruction-following tasks. Also, its performance in Thai is on par with GPT-3.5 while being 2.62 times more efficient in tokenizing Thai text.
**This is not an instruction-tuned model** - It may not be able to follow human instructions without using one/few-shot learning or instruction fine-tuning. The model does not have any moderation mechanisms, and may generate harmful or inappropriate responses.
The Instruct model (chat-model) will be released soon.
<div align="center">
<img src="https://storage.googleapis.com/scb10x-ai-lab-public/assets/typhoon_benchmark.png" alt="Typhoon benchmark" width="100%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</div>
For full details of this model, please read our [paper](https://arxiv.org/abs/2312.13951).
## Model Description
- **Model type**: A 7B pretrained decoder-only model
- **Requirement**: transformers 4.34.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: Apache-2.0 (Commercial)
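The card lists requirements but no usage snippet; loading the base model with `transformers` generally looks like this (a minimal sketch; the Thai prompt is illustrative, and as noted above the model is pretrained-only, not instruction-tuned):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "scb10x/typhoon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Plain continuation of a Thai prompt (the base model is not instruction-tuned)
inputs = tokenizer("ประเทศไทยมี", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```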
## Performance on Thai Benchmark
| **Model** | **ONET** | **IC** | **TGAT** | **TPAT-1** | **A-Level** |
|---------------------|----------|--------|----------|------------|-------------|
| Typhoon-7B | 0.379 | 0.393 | 0.700 | 0.414 | 0.324 |
| SeaLLM-7B | 0.342 | 0.256 | 0.589 | 0.336 | 0.305 |
| OpenThaiGPT-beta-7B | 0.180 | 0.278 | 0.411 | 0.319 | 0.243 |
| WangChanGLM | 0.192 | 0.271 | 0.167 | 0.172 | 0.175 |
| SEA-LION-7B | 0.179 | 0.290 | 0.244 | 0.198 | 0.175 |
| Avg. Human | 0.318 | - | 0.472 | 0.406 | - |
## Intended Uses & Limitations
This model is a pretrained base model. Thus, it may not be able to follow human instructions without using one/few-shot learning or instruction fine-tuning. The model does not have any moderation mechanisms, and may generate harmful or inappropriate responses.
## SCB10X AI Team
- Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-7B useful for your work, please cite it using:
```
@article{pipatanakul2023typhoon,
title={Typhoon: Thai Large Language Models},
author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
year={2023},
journal={arXiv preprint arXiv:2312.13951},
url={https://arxiv.org/abs/2312.13951}
}
```
## Contact Us
- E-mail: [email protected]
|
Anwaarma/S04
|
Anwaarma
| 2023-12-24T16:37:36Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Anwaarma/Merged-Server-praj",
"base_model:finetune:Anwaarma/Merged-Server-praj",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T16:27:38Z |
---
base_model: Anwaarma/Merged-Server-praj
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: S04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S04
This model is a fine-tuned version of [Anwaarma/Merged-Server-praj](https://huggingface.co/Anwaarma/Merged-Server-praj) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5249
- Accuracy: 0.71
- F1: 0.8304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.01 | 50 | 0.6296 | 0.65 | 0.6505 |
| No log | 0.01 | 100 | 0.6280 | 0.67 | 0.6697 |
| No log | 0.02 | 150 | 0.6210 | 0.67 | 0.6665 |
| No log | 0.03 | 200 | 0.6583 | 0.65 | 0.6505 |
| No log | 0.03 | 250 | 0.6650 | 0.65 | 0.6505 |
| No log | 0.04 | 300 | 0.6613 | 0.67 | 0.6697 |
| No log | 0.04 | 350 | 0.6663 | 0.65 | 0.6502 |
| No log | 0.05 | 400 | 0.6704 | 0.67 | 0.6702 |
| No log | 0.06 | 450 | 0.6570 | 0.68 | 0.6794 |
| 0.6123 | 0.06 | 500 | 0.6430 | 0.65 | 0.6502 |
| 0.6123 | 0.07 | 550 | 0.6558 | 0.64 | 0.6404 |
| 0.6123 | 0.08 | 600 | 0.6662 | 0.64 | 0.64 |
| 0.6123 | 0.08 | 650 | 0.6547 | 0.64 | 0.6406 |
| 0.6123 | 0.09 | 700 | 0.6407 | 0.66 | 0.6605 |
| 0.6123 | 0.09 | 750 | 0.6238 | 0.66 | 0.6605 |
| 0.6123 | 0.1 | 800 | 0.6223 | 0.68 | 0.6794 |
| 0.6123 | 0.11 | 850 | 0.6006 | 0.66 | 0.6604 |
| 0.6123 | 0.11 | 900 | 0.6294 | 0.68 | 0.6773 |
| 0.6123 | 0.12 | 950 | 0.6195 | 0.66 | 0.66 |
| 0.6014 | 0.13 | 1000 | 0.6119 | 0.65 | 0.6505 |
| 0.6014 | 0.13 | 1050 | 0.6230 | 0.67 | 0.6702 |
| 0.6014 | 0.14 | 1100 | 0.6410 | 0.69 | 0.6905 |
| 0.6014 | 0.14 | 1150 | 0.6306 | 0.67 | 0.6705 |
| 0.6014 | 0.15 | 1200 | 0.6476 | 0.7 | 0.6994 |
| 0.6014 | 0.16 | 1250 | 0.6244 | 0.67 | 0.6705 |
| 0.6014 | 0.16 | 1300 | 0.6078 | 0.69 | 0.6897 |
| 0.6014 | 0.17 | 1350 | 0.5869 | 0.67 | 0.6705 |
| 0.6014 | 0.18 | 1400 | 0.6164 | 0.67 | 0.6665 |
| 0.6014 | 0.18 | 1450 | 0.6054 | 0.65 | 0.6505 |
| 0.5906 | 0.19 | 1500 | 0.5947 | 0.67 | 0.6705 |
| 0.5906 | 0.19 | 1550 | 0.5765 | 0.69 | 0.6905 |
| 0.5906 | 0.2 | 1600 | 0.5677 | 0.69 | 0.6905 |
| 0.5906 | 0.21 | 1650 | 0.5828 | 0.7 | 0.7005 |
| 0.5906 | 0.21 | 1700 | 0.6249 | 0.67 | 0.6689 |
| 0.5906 | 0.22 | 1750 | 0.5833 | 0.69 | 0.6905 |
| 0.5906 | 0.23 | 1800 | 0.5838 | 0.68 | 0.6804 |
| 0.5906 | 0.23 | 1850 | 0.5923 | 0.7 | 0.7004 |
| 0.5906 | 0.24 | 1900 | 0.5749 | 0.69 | 0.6905 |
| 0.5906 | 0.25 | 1950 | 0.5769 | 0.7 | 0.7004 |
| 0.5736 | 0.25 | 2000 | 0.5706 | 0.7 | 0.7005 |
| 0.5736 | 0.26 | 2050 | 0.5967 | 0.69 | 0.6897 |
| 0.5736 | 0.26 | 2100 | 0.5866 | 0.69 | 0.6897 |
| 0.5736 | 0.27 | 2150 | 0.5901 | 0.7 | 0.7 |
| 0.5736 | 0.28 | 2200 | 0.5771 | 0.7 | 0.7004 |
| 0.5736 | 0.28 | 2250 | 0.5616 | 0.69 | 0.6905 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Realgon/N_roberta_imdb_padding40model
|
Realgon
| 2023-12-24T16:29:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T14:04:22Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: N_roberta_imdb_padding40model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.94952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# N_roberta_imdb_padding40model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4922
- Accuracy: 0.9495
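For quick sentiment inference, a minimal sketch using the `pipeline` API (a sketch; the label names come from the model's config and are not documented in this card):
```python
from transformers import pipeline

# Loads the fine-tuned RoBERTa classifier from the Hub.
clf = pipeline("text-classification", model="Realgon/N_roberta_imdb_padding40model")
print(clf("This movie was a delight from start to finish."))
```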
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2081 | 1.0 | 1563 | 0.2432 | 0.9283 |
| 0.1726 | 2.0 | 3126 | 0.1724 | 0.9493 |
| 0.114 | 3.0 | 4689 | 0.2842 | 0.9384 |
| 0.0767 | 4.0 | 6252 | 0.2583 | 0.9463 |
| 0.0552 | 5.0 | 7815 | 0.3703 | 0.9420 |
| 0.0357 | 6.0 | 9378 | 0.3342 | 0.9386 |
| 0.0318 | 7.0 | 10941 | 0.3284 | 0.9462 |
| 0.0316 | 8.0 | 12504 | 0.4194 | 0.9410 |
| 0.0149 | 9.0 | 14067 | 0.4083 | 0.9483 |
| 0.0175 | 10.0 | 15630 | 0.4237 | 0.9468 |
| 0.0151 | 11.0 | 17193 | 0.4459 | 0.9457 |
| 0.0113 | 12.0 | 18756 | 0.4569 | 0.9478 |
| 0.0061 | 13.0 | 20319 | 0.4325 | 0.9482 |
| 0.0034 | 14.0 | 21882 | 0.5188 | 0.9472 |
| 0.0059 | 15.0 | 23445 | 0.4740 | 0.9484 |
| 0.0078 | 16.0 | 25008 | 0.4421 | 0.9485 |
| 0.0 | 17.0 | 26571 | 0.4819 | 0.9493 |
| 0.0035 | 18.0 | 28134 | 0.4845 | 0.9492 |
| 0.0 | 19.0 | 29697 | 0.5065 | 0.9486 |
| 0.0013 | 20.0 | 31260 | 0.4922 | 0.9495 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
TheBloke/typhoon-7B-GGUF
|
TheBloke
| 2023-12-24T16:29:53Z | 203 | 8 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"pretrained",
"text-generation",
"th",
"arxiv:2312.13951",
"base_model:scb10x/typhoon-7b",
"base_model:quantized:scb10x/typhoon-7b",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-24T16:25:16Z |
---
base_model: scb10x/typhoon-7b
inference: false
language:
- th
library_name: transformers
license: apache-2.0
model_creator: SCB 10X
model_name: Typhoon 7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- pretrained
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Typhoon 7B - GGUF
- Model creator: [SCB 10X](https://huggingface.co/scb10x)
- Original model: [Typhoon 7B](https://huggingface.co/scb10x/typhoon-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [SCB 10X's Typhoon 7B](https://huggingface.co/scb10x/typhoon-7b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/typhoon-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/typhoon-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/typhoon-7B-GGUF)
* [SCB 10X's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/scb10x/typhoon-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
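As a quick back-of-the-envelope check of the Q4_K figure, here is a sketch of the arithmetic (the two fp16 super-block constants are an assumption based on the k-quants layout):
```python
# Q4_K: one super-block = 8 blocks x 32 weights = 256 weights.
weights = 8 * 32
quant_bits = weights * 4           # 4-bit quants per weight
scale_bits = 8 * 6 + 8 * 6         # per-block scales and mins, 6 bits each
super_bits = 2 * 16                # assumed fp16 super-block scale and min
print((quant_bits + scale_bits + super_bits) / weights)  # -> 4.5 bpw
```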
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [typhoon-7b.Q2_K.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q2_K.gguf) | Q2_K | 2 | 3.10 GB| 5.60 GB | smallest, significant quality loss - not recommended for most purposes |
| [typhoon-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.18 GB| 5.68 GB | very small, high quality loss |
| [typhoon-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.54 GB| 6.04 GB | very small, high quality loss |
| [typhoon-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.84 GB| 6.34 GB | small, substantial quality loss |
| [typhoon-7b.Q4_0.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q4_0.gguf) | Q4_0 | 4 | 4.13 GB| 6.63 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [typhoon-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.16 GB| 6.66 GB | small, greater quality loss |
| [typhoon-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.39 GB| 6.89 GB | medium, balanced quality - recommended |
| [typhoon-7b.Q5_0.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q5_0.gguf) | Q5_0 | 5 | 5.02 GB| 7.52 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [typhoon-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.02 GB| 7.52 GB | large, low quality loss - recommended |
| [typhoon-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.15 GB| 7.65 GB | large, very low quality loss - recommended |
| [typhoon-7b.Q6_K.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q6_K.gguf) | Q6_K | 6 | 5.96 GB| 8.46 GB | very large, extremely low quality loss |
| [typhoon-7b.Q8_0.gguf](https://huggingface.co/TheBloke/typhoon-7B-GGUF/blob/main/typhoon-7b.Q8_0.gguf) | Q8_0 | 8 | 7.72 GB| 10.22 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/typhoon-7B-GGUF and below it, a specific filename to download, such as: typhoon-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/typhoon-7B-GGUF typhoon-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
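If you'd rather stay in Python, roughly the same single-file download can be done with the `huggingface_hub` API (a minimal sketch using the standard `hf_hub_download` helper):
```python
from huggingface_hub import hf_hub_download

# Downloads the file into the current directory and returns its local path.
path = hf_hub_download(
    repo_id="TheBloke/typhoon-7B-GGUF",
    filename="typhoon-7b.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```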
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/typhoon-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/typhoon-7B-GGUF typhoon-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m typhoon-7b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./typhoon-7b.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./typhoon-7b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
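As a quick orientation before following those guides, a minimal LangChain + llama-cpp-python sketch might look like this (assuming `langchain-community` is installed; the parameters mirror the llama-cpp-python example above):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./typhoon-7b.Q4_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,   # set to 0 if no GPU acceleration is available
    temperature=0.7,
)
print(llm.invoke("{prompt}"))
```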
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: SCB 10X's Typhoon 7B
# Typhoon-7B: Thai Large Language Model (Pretrained)
**Typhoon-7B** is a *pretrained* Thai 🇹🇭 large language model with 7 billion parameters, and it is based on Mistral-7B.
**Typhoon-7B** outperforms all open-source Thai language models at the time of writing as evaluated on Thai examination benchmarks, and its instruction-tuned variant achieves the best results in instruction-following tasks. Also, its performance in Thai is on par with GPT-3.5 while being 2.62 times more efficient in tokenizing Thai text.
**This is not an instruction-tuned model** - it may not be able to follow human instructions without using one/few-shot learning or instruction fine-tuning. The model does not have any moderation mechanisms, and may generate harmful or inappropriate responses.
The Instruct model (chat-model) will be released soon.
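One way to sanity-check the tokenizer-efficiency claim above is to compare token counts on the same Thai text (a sketch; the sample sentence is arbitrary):
```python
from transformers import AutoTokenizer

thai_text = "กรุงเทพมหานครเป็นเมืองหลวงของประเทศไทย"
for name in ("scb10x/typhoon-7b", "mistralai/Mistral-7B-v0.1"):
    tok = AutoTokenizer.from_pretrained(name)
    # Fewer tokens for the same text means a more efficient tokenizer.
    print(name, len(tok.encode(thai_text, add_special_tokens=False)))
```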
<div align="center">
<img src="https://storage.googleapis.com/scb10x-ai-lab-public/assets/typhoon_benchmark.png" alt="Typhoon benchmark" width="100%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</div>
For full details of this model, please read our [paper](https://arxiv.org/abs/2312.13951).
## Model Description
- **Model type**: A 7B pretrained decoder-only model
- **Requirement**: transformers 4.34.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: Apache-2.0 (Commercial)
## Performance on Thai Benchmark
| **Model** | **ONET** | **IC** | **TGAT** | **TPAT-1** | **A-Level** |
|---------------------|----------|--------|----------|------------|-------------|
| Typhoon-7B | 0.379 | 0.393 | 0.700 | 0.414 | 0.324 |
| SeaLLM-7B | 0.342 | 0.256 | 0.589 | 0.336 | 0.305 |
| OpenThaiGPT-beta-7B | 0.180 | 0.278 | 0.411 | 0.319 | 0.243 |
| WangChanGLM | 0.192 | 0.271 | 0.167 | 0.172 | 0.175 |
| SEA-LION-7B | 0.179 | 0.290 | 0.244 | 0.198 | 0.175 |
| Avg. Human | 0.318 | - | 0.472 | 0.406 | - |
## Intended Uses & Limitations
This model is a pretrained base model. Thus, it may not be able to follow human instructions without using one/few-shot learning or instruction fine-tuning. The model does not have any moderation mechanisms, and may generate harmful or inappropriate responses.
## SCB10X AI Team
- Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-7B useful for your work, please cite it using:
```
@article{pipatanakul2023typhoon,
title={Typhoon: Thai Large Language Models},
author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
year={2023},
journal={arXiv preprint arXiv:2312.13951},
url={https://arxiv.org/abs/2312.13951}
}
```
## Contact Us
- E-mail: [email protected]
<!-- original-model-card end -->
|
hkivancoral/hushem_40x_deit_tiny_sgd_0001_fold4
|
hkivancoral
| 2023-12-24T16:26:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T15:55:44Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_sgd_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5952380952380952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0643
- Accuracy: 0.5952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4577 | 1.0 | 219 | 1.6228 | 0.1667 |
| 1.3428 | 2.0 | 438 | 1.5859 | 0.1429 |
| 1.3515 | 3.0 | 657 | 1.5534 | 0.1667 |
| 1.3164 | 4.0 | 876 | 1.5241 | 0.1905 |
| 1.262 | 5.0 | 1095 | 1.5001 | 0.1905 |
| 1.2747 | 6.0 | 1314 | 1.4781 | 0.1905 |
| 1.2005 | 7.0 | 1533 | 1.4587 | 0.1905 |
| 1.2174 | 8.0 | 1752 | 1.4397 | 0.2143 |
| 1.1711 | 9.0 | 1971 | 1.4197 | 0.2143 |
| 1.1443 | 10.0 | 2190 | 1.4007 | 0.2619 |
| 1.1123 | 11.0 | 2409 | 1.3821 | 0.2857 |
| 1.164 | 12.0 | 2628 | 1.3642 | 0.4048 |
| 1.0774 | 13.0 | 2847 | 1.3471 | 0.3810 |
| 1.1066 | 14.0 | 3066 | 1.3290 | 0.3810 |
| 1.055 | 15.0 | 3285 | 1.3135 | 0.4286 |
| 1.0496 | 16.0 | 3504 | 1.2984 | 0.4048 |
| 1.112 | 17.0 | 3723 | 1.2838 | 0.4286 |
| 1.0058 | 18.0 | 3942 | 1.2696 | 0.4286 |
| 1.0363 | 19.0 | 4161 | 1.2563 | 0.4286 |
| 1.0446 | 20.0 | 4380 | 1.2431 | 0.4286 |
| 1.0301 | 21.0 | 4599 | 1.2308 | 0.4286 |
| 1.0066 | 22.0 | 4818 | 1.2182 | 0.4524 |
| 0.9188 | 23.0 | 5037 | 1.2068 | 0.4524 |
| 0.9729 | 24.0 | 5256 | 1.1969 | 0.5238 |
| 0.9215 | 25.0 | 5475 | 1.1858 | 0.5238 |
| 0.9604 | 26.0 | 5694 | 1.1753 | 0.5476 |
| 0.9173 | 27.0 | 5913 | 1.1663 | 0.5714 |
| 0.9314 | 28.0 | 6132 | 1.1573 | 0.5714 |
| 0.8654 | 29.0 | 6351 | 1.1486 | 0.5714 |
| 0.9372 | 30.0 | 6570 | 1.1410 | 0.5714 |
| 0.9028 | 31.0 | 6789 | 1.1331 | 0.5714 |
| 0.9732 | 32.0 | 7008 | 1.1254 | 0.5714 |
| 0.9146 | 33.0 | 7227 | 1.1186 | 0.5714 |
| 0.8712 | 34.0 | 7446 | 1.1126 | 0.5714 |
| 0.8981 | 35.0 | 7665 | 1.1068 | 0.5714 |
| 0.8626 | 36.0 | 7884 | 1.1011 | 0.5714 |
| 0.884 | 37.0 | 8103 | 1.0956 | 0.5714 |
| 0.9119 | 38.0 | 8322 | 1.0906 | 0.5714 |
| 0.8378 | 39.0 | 8541 | 1.0862 | 0.5714 |
| 0.8095 | 40.0 | 8760 | 1.0823 | 0.5952 |
| 0.9067 | 41.0 | 8979 | 1.0785 | 0.5952 |
| 0.874 | 42.0 | 9198 | 1.0755 | 0.5714 |
| 0.8784 | 43.0 | 9417 | 1.0728 | 0.5714 |
| 0.8408 | 44.0 | 9636 | 1.0704 | 0.5714 |
| 0.8315 | 45.0 | 9855 | 1.0684 | 0.5714 |
| 0.8598 | 46.0 | 10074 | 1.0667 | 0.5714 |
| 0.8452 | 47.0 | 10293 | 1.0654 | 0.5952 |
| 0.863 | 48.0 | 10512 | 1.0647 | 0.5952 |
| 0.8292 | 49.0 | 10731 | 1.0643 | 0.5952 |
| 0.7869 | 50.0 | 10950 | 1.0643 | 0.5952 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Anwaarma/S04-PC
|
Anwaarma
| 2023-12-24T16:24:56Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Anwaarma/Merged-Server-praj",
"base_model:finetune:Anwaarma/Merged-Server-praj",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T16:16:44Z |
---
base_model: Anwaarma/Merged-Server-praj
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: S04-PC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S04-PC
This model is a fine-tuned version of [Anwaarma/Merged-Server-praj](https://huggingface.co/Anwaarma/Merged-Server-praj) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5103
- Accuracy: 0.68
- F1: 0.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.01 | 50 | 0.6565 | 0.61 | 0.6017 |
| No log | 0.01 | 100 | 0.6527 | 0.66 | 0.6604 |
| No log | 0.02 | 150 | 0.6491 | 0.63 | 0.6306 |
| No log | 0.03 | 200 | 0.6530 | 0.65 | 0.6488 |
| No log | 0.03 | 250 | 0.6844 | 0.66 | 0.6605 |
| No log | 0.04 | 300 | 0.6677 | 0.67 | 0.6705 |
| No log | 0.04 | 350 | 0.6830 | 0.65 | 0.6505 |
| No log | 0.05 | 400 | 0.6547 | 0.63 | 0.6297 |
| No log | 0.06 | 450 | 0.6579 | 0.63 | 0.6306 |
| 0.6131 | 0.06 | 500 | 0.6316 | 0.62 | 0.62 |
| 0.6131 | 0.07 | 550 | 0.6677 | 0.65 | 0.6505 |
| 0.6131 | 0.08 | 600 | 0.6725 | 0.68 | 0.6804 |
| 0.6131 | 0.08 | 650 | 0.6304 | 0.66 | 0.6600 |
| 0.6131 | 0.09 | 700 | 0.6332 | 0.67 | 0.6705 |
| 0.6131 | 0.09 | 750 | 0.5832 | 0.68 | 0.6804 |
| 0.6131 | 0.1 | 800 | 0.5870 | 0.68 | 0.6794 |
| 0.6131 | 0.11 | 850 | 0.5742 | 0.7 | 0.6994 |
| 0.6131 | 0.11 | 900 | 0.5861 | 0.68 | 0.6794 |
| 0.6131 | 0.12 | 950 | 0.5922 | 0.68 | 0.6794 |
| 0.5945 | 0.13 | 1000 | 0.5769 | 0.67 | 0.6697 |
| 0.5945 | 0.13 | 1050 | 0.6237 | 0.7 | 0.7004 |
| 0.5945 | 0.14 | 1100 | 0.6270 | 0.69 | 0.6897 |
| 0.5945 | 0.14 | 1150 | 0.6026 | 0.65 | 0.6497 |
| 0.5945 | 0.15 | 1200 | 0.6483 | 0.69 | 0.6902 |
| 0.5945 | 0.16 | 1250 | 0.6043 | 0.65 | 0.6502 |
| 0.5945 | 0.16 | 1300 | 0.5933 | 0.69 | 0.6897 |
| 0.5945 | 0.17 | 1350 | 0.5837 | 0.69 | 0.6902 |
| 0.5945 | 0.18 | 1400 | 0.6172 | 0.68 | 0.6784 |
| 0.5945 | 0.18 | 1450 | 0.5930 | 0.69 | 0.6902 |
| 0.5822 | 0.19 | 1500 | 0.5816 | 0.69 | 0.6902 |
| 0.5822 | 0.19 | 1550 | 0.5893 | 0.69 | 0.6902 |
| 0.5822 | 0.2 | 1600 | 0.5926 | 0.69 | 0.6905 |
| 0.5822 | 0.21 | 1650 | 0.5815 | 0.67 | 0.6705 |
| 0.5822 | 0.21 | 1700 | 0.6059 | 0.67 | 0.6689 |
| 0.5822 | 0.22 | 1750 | 0.5986 | 0.68 | 0.6794 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
NouRed/Med-Phi-2-QLoRa
|
NouRed
| 2023-12-24T16:24:08Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | 2023-12-24T01:01:35Z |
---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
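In the absence of official starter code, a minimal sketch of the standard PEFT adapter-loading workflow (the adapter id comes from this repo; everything else is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "NouRed/Med-Phi-2-QLoRa")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
```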
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
vitax10/fewshot_model
|
vitax10
| 2023-12-24T16:18:38Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T15:58:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: fewshot_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fewshot_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.1+cpu
- Tokenizers 0.13.3
|
YeungNLP/firefly-mixtral-8x7b
|
YeungNLP
| 2023-12-24T16:07:14Z | 1,507 | 19 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-18T02:18:50Z |
---
license: apache-2.0
language:
- en
---
This model is fine-tuned from [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) with [Firefly](https://github.com/yangjianxin1/Firefly) on 48k samples from UltraChat.
## Evaluation
Although we fine-tune with only 48k samples, our model achieves excellent performance.
| Model                      | Open LLM Leaderboard |
|----------------------------|----------------------|
| Qwen-72B                   | 73.6                 |
| Mixtral-8x7B-Instruct-v0.1 | 72.62                |
| **Firefly-Mixtral-8x7B**   | **70.34**            |
| Yi-34B                     | 69.42                |
| Mixtral-8x7B-v0.1          | 68.42                |
| Llama2-65B-Chat            | 67.87                |
| Qwen-14B                   | 65.86                |
| Vicuna-33B-v1.3            | 58.54                |
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name_or_path = 'YeungNLP/firefly-mixtral-8x7b'
max_new_tokens = 500
top_p = 0.9
temperature = 0.35
repetition_penalty = 1.0
# Load the model in fp16 and spread it across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
text = "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions."

# Build the prompt manually in the [INST] ... [/INST] format
inst_begin_tokens = tokenizer.encode('[INST]', add_special_tokens=False)
inst_end_tokens = tokenizer.encode('[/INST]', add_special_tokens=False)
human_tokens = tokenizer.encode(text, add_special_tokens=False)
input_ids = [tokenizer.bos_token_id] + inst_begin_tokens + human_tokens + inst_end_tokens
# input_ids = human_tokens
input_ids = torch.tensor([input_ids], dtype=torch.long).cuda()

with torch.no_grad():
    outputs = model.generate(
        input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True,
        top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty,
        eos_token_id=tokenizer.eos_token_id
    )

# Strip the prompt tokens and decode only the generated continuation
outputs = outputs.tolist()[0][len(input_ids[0]):]
response = tokenizer.decode(outputs)
response = response.strip().replace(tokenizer.eos_token, "").strip()
print("Chatbot:{}".format(response))
```
|
grissi/dqn-SpaceInvadersNoFrameskip-v4
|
grissi
| 2023-12-24T16:01:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T16:01:04Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 497.00 +/- 227.91
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga grissi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga grissi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga grissi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Anwaarma/INT03
|
Anwaarma
| 2023-12-24T15:58:19Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:prajjwal1/bert-tiny",
"base_model:finetune:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T15:56:44Z |
---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: INT03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# INT03
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0169
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.0 | 50 | 0.6888 | 0.62 | 0.5657 |
| No log | 0.01 | 100 | 0.6817 | 0.66 | 0.5965 |
| No log | 0.01 | 150 | 0.6004 | 0.86 | 0.8553 |
| No log | 0.02 | 200 | 0.4136 | 0.87 | 0.8651 |
| No log | 0.02 | 250 | 0.3550 | 0.89 | 0.8889 |
| No log | 0.03 | 300 | 0.3241 | 0.89 | 0.8889 |
| No log | 0.03 | 350 | 0.3144 | 0.89 | 0.8889 |
| No log | 0.04 | 400 | 0.3146 | 0.89 | 0.8889 |
| No log | 0.04 | 450 | 0.2985 | 0.89 | 0.8889 |
| 0.5219 | 0.05 | 500 | 0.2604 | 0.92 | 0.92 |
| 0.5219 | 0.05 | 550 | 0.2242 | 0.92 | 0.9202 |
| 0.5219 | 0.06 | 600 | 0.1976 | 0.92 | 0.9197 |
| 0.5219 | 0.06 | 650 | 0.1800 | 0.93 | 0.9302 |
| 0.5219 | 0.07 | 700 | 0.1685 | 0.93 | 0.9302 |
| 0.5219 | 0.07 | 750 | 0.1706 | 0.93 | 0.9303 |
| 0.5219 | 0.08 | 800 | 0.1532 | 0.93 | 0.9303 |
| 0.5219 | 0.08 | 850 | 0.1411 | 0.93 | 0.9303 |
| 0.5219 | 0.09 | 900 | 0.1070 | 0.98 | 0.9799 |
| 0.5219 | 0.09 | 950 | 0.0970 | 0.96 | 0.9601 |
| 0.2869 | 0.1 | 1000 | 0.0775 | 0.96 | 0.9601 |
| 0.2869 | 0.1 | 1050 | 0.0789 | 0.97 | 0.9701 |
| 0.2869 | 0.11 | 1100 | 0.0546 | 0.98 | 0.98 |
| 0.2869 | 0.11 | 1150 | 0.0789 | 0.98 | 0.9800 |
| 0.2869 | 0.12 | 1200 | 0.0425 | 0.99 | 0.9900 |
| 0.2869 | 0.12 | 1250 | 0.0443 | 0.99 | 0.9900 |
| 0.2869 | 0.13 | 1300 | 0.0340 | 0.99 | 0.9900 |
| 0.2869 | 0.13 | 1350 | 0.0649 | 0.97 | 0.9700 |
| 0.2869 | 0.14 | 1400 | 0.0241 | 1.0 | 1.0 |
| 0.2869 | 0.14 | 1450 | 0.0215 | 1.0 | 1.0 |
| 0.1754 | 0.15 | 1500 | 0.0146 | 1.0 | 1.0 |
| 0.1754 | 0.15 | 1550 | 0.0125 | 1.0 | 1.0 |
| 0.1754 | 0.16 | 1600 | 0.0122 | 1.0 | 1.0 |
| 0.1754 | 0.16 | 1650 | 0.0110 | 1.0 | 1.0 |
| 0.1754 | 0.17 | 1700 | 0.0092 | 1.0 | 1.0 |
| 0.1754 | 0.17 | 1750 | 0.0117 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
yuanhuaisen/autotrain-55x6s-5uwoq
|
yuanhuaisen
| 2023-12-24T15:49:45Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:yuanhuaisen/autotrain-data-autotrain-55x6s-5uwoq",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T15:49:39Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- yuanhuaisen/autotrain-data-autotrain-55x6s-5uwoq
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: nan
- f1_macro: 0.12345679012345678
- f1_micro: 0.22727272727272727
- f1_weighted: 0.08417508417508417
- precision_macro: 0.07575757575757576
- precision_micro: 0.22727272727272727
- precision_weighted: 0.051652892561983466
- recall_macro: 0.3333333333333333
- recall_micro: 0.22727272727272727
- recall_weighted: 0.22727272727272727
- accuracy: 0.22727272727272727
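A minimal inference sketch using the `pipeline` API (the sample image URL is one of the widget examples above; label names come from the training data and are not documented here):
```python
from transformers import pipeline

# Loads the AutoTrain-produced ViT classifier from the Hub.
classifier = pipeline("image-classification", model="yuanhuaisen/autotrain-55x6s-5uwoq")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```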
|
iloncka/efficientformerv2_l.snap_dist_in1k_ep_20
|
iloncka
| 2023-12-24T15:46:14Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-12-24T15:38:00Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Ahmed107/hamsa-lora-v11
|
Ahmed107
| 2023-12-24T15:44:56Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-medium",
"base_model:adapter:openai/whisper-medium",
"region:us"
] | null | 2023-12-24T15:44:48Z |
---
library_name: peft
base_model: openai/whisper-medium
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
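Since no starter code is provided, a minimal sketch of the standard PEFT workflow for a Whisper adapter (the base model comes from this card's metadata; the rest is an assumption):
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base Whisper model, then attach the LoRA adapter from this repo.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "Ahmed107/hamsa-lora-v11")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```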
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
ThuyNT03/KLTN_COQE_viT5_total_SOAPL_v4
|
ThuyNT03
| 2023-12-24T15:42:02Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-24T14:54:11Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_SOAPL_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_SOAPL_v4
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
Zled/phi-bi
|
Zled
| 2023-12-24T15:37:35Z | 6 | 1 |
peft
|
[
"peft",
"safetensors",
"phi-msft",
"custom_code",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | 2023-12-24T11:27:01Z |
---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
microsoft/phi-2 model trained on BI55/MedText
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
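Pending official instructions, a minimal sketch of loading the adapter and generating (standard PEFT assumptions; the prompt is arbitrary and only illustrates the MedText-style use case):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "Zled/phi-bi")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

inputs = tokenizer("Describe the typical presentation of appendicitis.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=100)[0]))
```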
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
hkivancoral/hushem_40x_deit_base_rms_0001_fold5
|
hkivancoral
| 2023-12-24T15:33:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T14:48:53Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8536585365853658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_0001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7342
- Accuracy: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
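Pending author details, a hedged inference sketch (the repo id is taken from this card's title; the image path is illustrative):
```python
# A minimal sketch, assuming this repo stores a standard transformers
# image-classification checkpoint (DeiT processor + model).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_40x_deit_base_rms_0001_fold5",
)
print(classifier("example_image.png"))  # illustrative input path
```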
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0617 | 1.0 | 220 | 0.4998 | 0.8780 |
| 0.0015 | 2.0 | 440 | 0.4091 | 0.9024 |
| 0.067 | 3.0 | 660 | 0.7752 | 0.9024 |
| 0.0007 | 4.0 | 880 | 0.7710 | 0.8293 |
| 0.0006 | 5.0 | 1100 | 0.9905 | 0.8293 |
| 0.0109 | 6.0 | 1320 | 1.1163 | 0.8049 |
| 0.0 | 7.0 | 1540 | 1.0399 | 0.8049 |
| 0.0 | 8.0 | 1760 | 1.0747 | 0.8293 |
| 0.0 | 9.0 | 1980 | 1.1399 | 0.8537 |
| 0.0 | 10.0 | 2200 | 1.2260 | 0.8537 |
| 0.0 | 11.0 | 2420 | 1.3150 | 0.8537 |
| 0.0 | 12.0 | 2640 | 1.3880 | 0.8537 |
| 0.0 | 13.0 | 2860 | 1.4421 | 0.8537 |
| 0.0 | 14.0 | 3080 | 1.4689 | 0.8537 |
| 0.0 | 15.0 | 3300 | 1.4886 | 0.8537 |
| 0.0 | 16.0 | 3520 | 1.5214 | 0.8537 |
| 0.0 | 17.0 | 3740 | 1.5517 | 0.8537 |
| 0.0 | 18.0 | 3960 | 1.5796 | 0.8537 |
| 0.0 | 19.0 | 4180 | 1.6055 | 0.8537 |
| 0.0 | 20.0 | 4400 | 1.6255 | 0.8537 |
| 0.0 | 21.0 | 4620 | 1.6409 | 0.8537 |
| 0.0 | 22.0 | 4840 | 1.6535 | 0.8537 |
| 0.0 | 23.0 | 5060 | 1.6637 | 0.8537 |
| 0.0 | 24.0 | 5280 | 1.6723 | 0.8537 |
| 0.0 | 25.0 | 5500 | 1.6796 | 0.8537 |
| 0.0 | 26.0 | 5720 | 1.6858 | 0.8537 |
| 0.0 | 27.0 | 5940 | 1.6914 | 0.8537 |
| 0.0 | 28.0 | 6160 | 1.6963 | 0.8537 |
| 0.0 | 29.0 | 6380 | 1.7007 | 0.8537 |
| 0.0 | 30.0 | 6600 | 1.7046 | 0.8537 |
| 0.0 | 31.0 | 6820 | 1.7081 | 0.8537 |
| 0.0 | 32.0 | 7040 | 1.7112 | 0.8537 |
| 0.0 | 33.0 | 7260 | 1.7141 | 0.8537 |
| 0.0 | 34.0 | 7480 | 1.7167 | 0.8537 |
| 0.0 | 35.0 | 7700 | 1.7191 | 0.8537 |
| 0.0 | 36.0 | 7920 | 1.7212 | 0.8537 |
| 0.0 | 37.0 | 8140 | 1.7232 | 0.8537 |
| 0.0 | 38.0 | 8360 | 1.7249 | 0.8537 |
| 0.0 | 39.0 | 8580 | 1.7265 | 0.8537 |
| 0.0 | 40.0 | 8800 | 1.7279 | 0.8537 |
| 0.0 | 41.0 | 9020 | 1.7292 | 0.8537 |
| 0.0 | 42.0 | 9240 | 1.7303 | 0.8537 |
| 0.0 | 43.0 | 9460 | 1.7312 | 0.8537 |
| 0.0 | 44.0 | 9680 | 1.7321 | 0.8537 |
| 0.0 | 45.0 | 9900 | 1.7328 | 0.8537 |
| 0.0 | 46.0 | 10120 | 1.7333 | 0.8537 |
| 0.0 | 47.0 | 10340 | 1.7337 | 0.8537 |
| 0.0 | 48.0 | 10560 | 1.7340 | 0.8537 |
| 0.0 | 49.0 | 10780 | 1.7342 | 0.8537 |
| 0.0 | 50.0 | 11000 | 1.7342 | 0.8537 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Oblix/xlm-roberta-base-language-detection_ONNX
|
Oblix
| 2023-12-24T15:27:39Z | 3 | 0 |
transformers
|
[
"transformers",
"onnx",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T15:22:14Z |
ONNX version of the https://huggingface.co/papluca/xlm-roberta-base-language-detection model
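No usage example is provided, so the sketch below is a hedged guess at how to run it with Optimum's ONNX Runtime backend; it assumes the repo contains a standard `model.onnx` export plus the original tokenizer files.
```python
# A hedged usage sketch via Optimum's ONNX Runtime integration.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

repo_id = "Oblix/xlm-roberta-base-language-detection_ONNX"
model = ORTModelForSequenceClassification.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

detector = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(detector("Bonjour, comment allez-vous ?"))  # expect a French ("fr") label
```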
|
hkivancoral/hushem_40x_deit_tiny_sgd_0001_fold2
|
hkivancoral
| 2023-12-24T15:25:08Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T14:55:02Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_sgd_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2757
- Accuracy: 0.4667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3997 | 1.0 | 215 | 1.4647 | 0.2444 |
| 1.3365 | 2.0 | 430 | 1.4330 | 0.2222 |
| 1.2917 | 3.0 | 645 | 1.4094 | 0.2444 |
| 1.2427 | 4.0 | 860 | 1.3927 | 0.2667 |
| 1.2441 | 5.0 | 1075 | 1.3794 | 0.2667 |
| 1.1872 | 6.0 | 1290 | 1.3684 | 0.2667 |
| 1.1795 | 7.0 | 1505 | 1.3589 | 0.3111 |
| 1.1209 | 8.0 | 1720 | 1.3504 | 0.3333 |
| 1.1403 | 9.0 | 1935 | 1.3427 | 0.3778 |
| 1.0825 | 10.0 | 2150 | 1.3360 | 0.3778 |
| 1.0205 | 11.0 | 2365 | 1.3306 | 0.3778 |
| 1.0287 | 12.0 | 2580 | 1.3256 | 0.4222 |
| 1.0526 | 13.0 | 2795 | 1.3201 | 0.4444 |
| 0.979 | 14.0 | 3010 | 1.3158 | 0.4444 |
| 1.009 | 15.0 | 3225 | 1.3119 | 0.4444 |
| 1.0242 | 16.0 | 3440 | 1.3068 | 0.4444 |
| 0.9586 | 17.0 | 3655 | 1.3041 | 0.4222 |
| 0.9705 | 18.0 | 3870 | 1.3009 | 0.4222 |
| 0.9559 | 19.0 | 4085 | 1.2993 | 0.4222 |
| 0.95 | 20.0 | 4300 | 1.2983 | 0.4444 |
| 0.9501 | 21.0 | 4515 | 1.2955 | 0.4444 |
| 0.9287 | 22.0 | 4730 | 1.2949 | 0.4444 |
| 0.8978 | 23.0 | 4945 | 1.2936 | 0.4444 |
| 0.8221 | 24.0 | 5160 | 1.2913 | 0.4444 |
| 0.8642 | 25.0 | 5375 | 1.2902 | 0.4444 |
| 0.8893 | 26.0 | 5590 | 1.2888 | 0.4444 |
| 0.8888 | 27.0 | 5805 | 1.2875 | 0.4444 |
| 0.8399 | 28.0 | 6020 | 1.2872 | 0.4444 |
| 0.8384 | 29.0 | 6235 | 1.2862 | 0.4444 |
| 0.8557 | 30.0 | 6450 | 1.2852 | 0.4444 |
| 0.8264 | 31.0 | 6665 | 1.2846 | 0.4444 |
| 0.7947 | 32.0 | 6880 | 1.2839 | 0.4222 |
| 0.7889 | 33.0 | 7095 | 1.2827 | 0.4222 |
| 0.829 | 34.0 | 7310 | 1.2822 | 0.4444 |
| 0.754 | 35.0 | 7525 | 1.2813 | 0.4444 |
| 0.7758 | 36.0 | 7740 | 1.2807 | 0.4444 |
| 0.8928 | 37.0 | 7955 | 1.2794 | 0.4444 |
| 0.734 | 38.0 | 8170 | 1.2794 | 0.4444 |
| 0.7594 | 39.0 | 8385 | 1.2785 | 0.4444 |
| 0.775 | 40.0 | 8600 | 1.2779 | 0.4444 |
| 0.7835 | 41.0 | 8815 | 1.2773 | 0.4667 |
| 0.7569 | 42.0 | 9030 | 1.2769 | 0.4667 |
| 0.7974 | 43.0 | 9245 | 1.2769 | 0.4667 |
| 0.7959 | 44.0 | 9460 | 1.2766 | 0.4667 |
| 0.8113 | 45.0 | 9675 | 1.2762 | 0.4667 |
| 0.7344 | 46.0 | 9890 | 1.2759 | 0.4667 |
| 0.7955 | 47.0 | 10105 | 1.2758 | 0.4667 |
| 0.7831 | 48.0 | 10320 | 1.2757 | 0.4667 |
| 0.7467 | 49.0 | 10535 | 1.2757 | 0.4667 |
| 0.8192 | 50.0 | 10750 | 1.2757 | 0.4667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ
|
TheBloke
| 2023-12-24T15:23:19Z | 17 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct",
"base_model:quantized:Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-12-24T14:41:25Z |
---
base_model: Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
inference: false
license: apache-2.0
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: SauerkrautLM Una SOLAR Instruct
model_type: solar
prompt_template: '### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SauerkrautLM Una SOLAR Instruct - GPTQ
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [SauerkrautLM Una SOLAR Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Yağız Çalık's SauerkrautLM Una SOLAR Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant-Newlines
```
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.98 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.59 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.25 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.99 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.18 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `SauerkrautLM-UNA-SOLAR-Instruct-GPTQ`:
```shell
mkdir SauerkrautLM-UNA-SOLAR-Instruct-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ --local-dir SauerkrautLM-UNA-SOLAR-Instruct-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir SauerkrautLM-UNA-SOLAR-Instruct-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir SauerkrautLM-UNA-SOLAR-Instruct-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir SauerkrautLM-UNA-SOLAR-Instruct-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ --local-dir SauerkrautLM-UNA-SOLAR-Instruct-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-UNA-SOLAR-Instruct-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''### User:
{prompt}
### Assistant:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''### User:
{prompt}
### Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yağız Çalık's SauerkrautLM Una SOLAR Instruct

# SauerkrautLM-UNA-SOLAR-Instruct
This is the model for SauerkrautLM-UNA-SOLAR-Instruct. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
# Yaml Config to reproduce
```yaml
slices:
- sources:
- model: VAGOsolutions/SauerkrautLM-SOLAR-Instruct
layer_range: [0, 48]
- model: fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
|
selinawisco/result_2
|
selinawisco
| 2023-12-24T15:19:59Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-12-07T15:03:49Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: result_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result_2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.7026
- Accuracy: 0.4058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.19 | 1.0 | 90 | 0.9620 | 0.1504 |
| 0.1041 | 1.99 | 180 | 0.9744 | 0.0753 |
| 0.0463 | 2.99 | 270 | 0.9972 | 0.0106 |
| 0.0003 | 4.0 | 361 | 0.9965 | 0.0354 |
| 0.0 | 5.0 | 451 | 0.9910 | 0.1336 |
| 0.0 | 5.99 | 541 | 0.9959 | 0.0548 |
| 0.0654 | 6.99 | 631 | 0.9952 | 0.0685 |
| 0.0617 | 8.0 | 722 | 0.9972 | 0.0307 |
| 0.0 | 9.0 | 812 | 0.9959 | 0.0454 |
| 0.0 | 9.99 | 902 | 0.9972 | 0.0382 |
| 0.0 | 10.99 | 992 | 0.9979 | 0.0365 |
| 0.0 | 11.12 | 1001 | 0.4058 | 10.7026 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
andreatorch/Reinforce-Unit4-cartPole
|
andreatorch
| 2023-12-24T15:19:11Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T15:18:59Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Unit4-cartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
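For readers who want the core idea without opening the course, here is a generic REINFORCE update sketch; it is not the course's exact implementation, just the standard Monte-Carlo policy-gradient step it teaches.
```python
# Generic REINFORCE (Monte-Carlo policy gradient) sketch in PyTorch:
# after sampling one episode, minimise -sum_t G_t * log pi(a_t | s_t).
import torch

def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
    returns, g = [], 0.0
    for r in reversed(rewards):               # discounted returns-to-go G_t
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalise for variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```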
|
iloncka/edgenext_xx_small.in1k_ep_20
|
iloncka
| 2023-12-24T15:14:51Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-12-24T15:11:50Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Depie/llama2-2-chat-7b-ToTTo
|
Depie
| 2023-12-24T15:14:18Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2023-12-24T15:11:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
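A hedged sketch reconstructing this config with `transformers.BitsAndBytesConfig` and attaching the adapter; the base-model id is an assumption, since the card does not name it (the repo name suggests Llama-2-7b-chat).
```python
# Recreate the 4-bit NF4 quantization config listed above, then attach
# this PEFT adapter. The base-model id is an assumption.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # assumption: base model for this adapter
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Depie/llama2-2-chat-7b-ToTTo")
```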
### Framework versions
- PEFT 0.4.0
|
npc0/chatglm3-6b-32k-int4
|
npc0
| 2023-12-24T15:12:47Z | 0 | 1 | null |
[
"glm",
"chatglm",
"ggml",
"zh",
"en",
"region:us"
] | null | 2023-11-23T11:14:31Z |
---
language:
- zh
- en
tags:
- glm
- chatglm
- ggml
---
# ChatGLM3-6B-32k-int4
## Introduction
ChatGLM3-6B-32k is the latest generation of open-source models in the ChatGLM series ([THUDM/chatglm3-6b](https://github.com/THUDM/ChatGLM3)).
This repository stores Q4_0 and Q4_1 weights produced from it with [ChatGLM.CPP](https://github.com/li-plus/chatglm.cpp) using GGML quantization.
## Performance
|Model |GGML quantize method| HDD size |
|--------------------------|--------------------|----------|
|chatglm3-32k-ggml-q4_0.bin| q4_0 | 3.51 GB |
|chatglm3-32k-ggml-q4_1.bin| q4_1 | 3.9 GB |
## Getting Started
1. Install dependency
```sh
pip install chatglm-cpp transformers
```
2. Download weight
```sh
wget https://huggingface.co/npc0/chatglm3-6b-32k-int4/resolve/main/chatglm3-32k-ggml-q4_0.bin
```
3. Code
```py
import chatglm_cpp
pipeline = chatglm_cpp.Pipeline("./chatglm3-32k-ggml-q4_0.bin")
pipeline.chat([chatglm_cpp.ChatMessage(role="user", content="你好")])
# Output: ChatMessage(role="assistant", content="你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。", tool_calls=[])
```
|
npc0/chatglm3-6b-32k-fp16
|
npc0
| 2023-12-24T15:11:55Z | 0 | 0 | null |
[
"glm",
"chatglm",
"ggml",
"zh",
"en",
"region:us"
] | null | 2023-11-27T14:38:39Z |
---
language:
- zh
- en
tags:
- glm
- chatglm
- ggml
---
# ChatGLM3-6B-32k-fp16
## Introduction
ChatGLM3-6B-32k is the latest generation of open-source models in the ChatGLM series ([THUDM/chatglm3-6b](https://github.com/THUDM/ChatGLM3)).
This repository stores f16 weights produced from it with [ChatGLM.CPP](https://github.com/li-plus/chatglm.cpp) using GGML conversion.
## Performance
|Model |GGML quantize method| HDD size |
|--------------------------|--------------------|----------|
|chatglm3-32k-ggml-f16.bin | f16 | 12.5 GB |
## Getting Started
1. Install dependency
```sh
pip install chatglm-cpp transformers
```
2. Download weight
```sh
wget https://huggingface.co/npc0/chatglm3-6b-32k-fp16/resolve/main/chatglm3-32k-ggml-f16.bin
```
3. Code
```py
import chatglm_cpp
pipeline = chatglm_cpp.Pipeline("./chatglm3-32k-ggml-f16.bin")
pipeline.chat([chatglm_cpp.ChatMessage(role="user", content="你好")])
# Output: ChatMessage(role="assistant", content="你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。", tool_calls=[])
```
|
iloncka/convnextv2_pico.fcmae_ft_in1k_ep_20
|
iloncka
| 2023-12-24T14:56:20Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-12-24T14:53:24Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
hkivancoral/hushem_40x_deit_tiny_sgd_0001_fold1
|
hkivancoral
| 2023-12-24T14:54:53Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T14:23:54Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_sgd_0001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1471
- Accuracy: 0.5333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4892 | 1.0 | 215 | 1.3892 | 0.2222 |
| 1.3775 | 2.0 | 430 | 1.3751 | 0.2889 |
| 1.3266 | 3.0 | 645 | 1.3631 | 0.3111 |
| 1.2619 | 4.0 | 860 | 1.3523 | 0.3111 |
| 1.235 | 5.0 | 1075 | 1.3429 | 0.3111 |
| 1.1826 | 6.0 | 1290 | 1.3347 | 0.3778 |
| 1.2015 | 7.0 | 1505 | 1.3284 | 0.3778 |
| 1.2072 | 8.0 | 1720 | 1.3225 | 0.4 |
| 1.1254 | 9.0 | 1935 | 1.3170 | 0.4 |
| 1.1293 | 10.0 | 2150 | 1.3118 | 0.3556 |
| 1.0925 | 11.0 | 2365 | 1.3069 | 0.3778 |
| 1.0731 | 12.0 | 2580 | 1.3024 | 0.3778 |
| 1.0421 | 13.0 | 2795 | 1.2976 | 0.3778 |
| 1.0531 | 14.0 | 3010 | 1.2928 | 0.3778 |
| 1.0284 | 15.0 | 3225 | 1.2877 | 0.3778 |
| 1.0283 | 16.0 | 3440 | 1.2824 | 0.4 |
| 1.0283 | 17.0 | 3655 | 1.2767 | 0.4222 |
| 1.0038 | 18.0 | 3870 | 1.2712 | 0.4444 |
| 0.9952 | 19.0 | 4085 | 1.2654 | 0.4444 |
| 0.9413 | 20.0 | 4300 | 1.2587 | 0.4444 |
| 0.9562 | 21.0 | 4515 | 1.2529 | 0.4667 |
| 1.0163 | 22.0 | 4730 | 1.2467 | 0.4667 |
| 0.9391 | 23.0 | 4945 | 1.2401 | 0.4667 |
| 0.955 | 24.0 | 5160 | 1.2340 | 0.4889 |
| 0.9454 | 25.0 | 5375 | 1.2281 | 0.4889 |
| 0.9013 | 26.0 | 5590 | 1.2229 | 0.4889 |
| 0.8818 | 27.0 | 5805 | 1.2169 | 0.5111 |
| 0.8594 | 28.0 | 6020 | 1.2115 | 0.5111 |
| 0.8984 | 29.0 | 6235 | 1.2064 | 0.5111 |
| 0.8277 | 30.0 | 6450 | 1.2009 | 0.5111 |
| 0.8636 | 31.0 | 6665 | 1.1955 | 0.5111 |
| 0.8466 | 32.0 | 6880 | 1.1910 | 0.5111 |
| 0.8955 | 33.0 | 7095 | 1.1866 | 0.5111 |
| 0.817 | 34.0 | 7310 | 1.1825 | 0.5111 |
| 0.8132 | 35.0 | 7525 | 1.1781 | 0.5111 |
| 0.7914 | 36.0 | 7740 | 1.1742 | 0.5111 |
| 0.835 | 37.0 | 7955 | 1.1705 | 0.5111 |
| 0.8383 | 38.0 | 8170 | 1.1668 | 0.5111 |
| 0.828 | 39.0 | 8385 | 1.1638 | 0.5111 |
| 0.7822 | 40.0 | 8600 | 1.1606 | 0.5111 |
| 0.8243 | 41.0 | 8815 | 1.1580 | 0.5333 |
| 0.9371 | 42.0 | 9030 | 1.1556 | 0.5333 |
| 0.8482 | 43.0 | 9245 | 1.1533 | 0.5333 |
| 0.8054 | 44.0 | 9460 | 1.1516 | 0.5333 |
| 0.8152 | 45.0 | 9675 | 1.1501 | 0.5333 |
| 0.8013 | 46.0 | 9890 | 1.1489 | 0.5333 |
| 0.7786 | 47.0 | 10105 | 1.1481 | 0.5333 |
| 0.7918 | 48.0 | 10320 | 1.1474 | 0.5333 |
| 0.8671 | 49.0 | 10535 | 1.1471 | 0.5333 |
| 0.8286 | 50.0 | 10750 | 1.1471 | 0.5333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
azi111/dolphin-2_2-yi-34b-465bpw-h8-exl2-cnen
|
azi111
| 2023-12-24T14:51:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T10:06:42Z |
---
license: other
license_name: yi-license
license_link: LICENSE
---
Measured VRAM usage exceeds 24 GB; do not use this on a 24 GB GPU.
Quantized on a bilingual Chinese-English corpus; Chinese performance did not meet expectations, so this is for testing only.
Source model: dolphin-2_2-yi-34b
|
azi111/dolphin-2_2-yi-34b-3bpw-h8-exl2-cnen
|
azi111
| 2023-12-24T14:50:51Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T13:05:37Z |
---
license: other
license_name: yi-license
license_link: LICENSE
---
Quantized on a bilingual Chinese-English corpus; Chinese performance did not meet expectations, so this is for testing only.
Source model: dolphin-2_2-yi-34b
|
hkivancoral/hushem_40x_deit_base_rms_001_fold4
|
hkivancoral
| 2023-12-24T14:46:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T14:02:16Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8095238095238095
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7808
- Accuracy: 0.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4095 | 1.0 | 219 | 1.4091 | 0.2381 |
| 1.3846 | 2.0 | 438 | 1.3865 | 0.2381 |
| 1.2802 | 3.0 | 657 | 1.3372 | 0.2381 |
| 1.1537 | 4.0 | 876 | 1.4032 | 0.2619 |
| 1.177 | 5.0 | 1095 | 1.3147 | 0.4286 |
| 1.1719 | 6.0 | 1314 | 0.9703 | 0.6667 |
| 1.0403 | 7.0 | 1533 | 1.2271 | 0.4762 |
| 0.9188 | 8.0 | 1752 | 0.9431 | 0.5714 |
| 0.8565 | 9.0 | 1971 | 1.0056 | 0.5952 |
| 0.8519 | 10.0 | 2190 | 0.7845 | 0.6429 |
| 0.7519 | 11.0 | 2409 | 0.7049 | 0.6905 |
| 0.8514 | 12.0 | 2628 | 0.6628 | 0.7857 |
| 0.8808 | 13.0 | 2847 | 0.8006 | 0.7381 |
| 0.796 | 14.0 | 3066 | 0.7332 | 0.6905 |
| 0.7213 | 15.0 | 3285 | 0.7486 | 0.6905 |
| 0.663 | 16.0 | 3504 | 0.4390 | 0.7857 |
| 0.5845 | 17.0 | 3723 | 0.9856 | 0.5952 |
| 0.5228 | 18.0 | 3942 | 0.6588 | 0.7381 |
| 0.5581 | 19.0 | 4161 | 0.6093 | 0.8571 |
| 0.518 | 20.0 | 4380 | 0.5316 | 0.6905 |
| 0.5058 | 21.0 | 4599 | 0.7052 | 0.7381 |
| 0.453 | 22.0 | 4818 | 0.6155 | 0.7143 |
| 0.4128 | 23.0 | 5037 | 0.7141 | 0.7381 |
| 0.44 | 24.0 | 5256 | 0.6896 | 0.7619 |
| 0.3933 | 25.0 | 5475 | 0.6353 | 0.7619 |
| 0.3648 | 26.0 | 5694 | 0.7225 | 0.8095 |
| 0.2677 | 27.0 | 5913 | 0.6987 | 0.8810 |
| 0.3023 | 28.0 | 6132 | 0.8143 | 0.8333 |
| 0.332 | 29.0 | 6351 | 0.8300 | 0.8333 |
| 0.2772 | 30.0 | 6570 | 0.6339 | 0.7619 |
| 0.1878 | 31.0 | 6789 | 0.6694 | 0.8333 |
| 0.2152 | 32.0 | 7008 | 0.7930 | 0.7619 |
| 0.2378 | 33.0 | 7227 | 0.7856 | 0.7619 |
| 0.1874 | 34.0 | 7446 | 0.6614 | 0.8571 |
| 0.2043 | 35.0 | 7665 | 0.7218 | 0.8095 |
| 0.122 | 36.0 | 7884 | 1.0415 | 0.8333 |
| 0.1837 | 37.0 | 8103 | 1.2016 | 0.7381 |
| 0.1148 | 38.0 | 8322 | 0.8289 | 0.7857 |
| 0.0825 | 39.0 | 8541 | 1.4711 | 0.7381 |
| 0.0828 | 40.0 | 8760 | 0.9405 | 0.8810 |
| 0.0736 | 41.0 | 8979 | 1.4104 | 0.8810 |
| 0.0864 | 42.0 | 9198 | 1.1297 | 0.8333 |
| 0.0176 | 43.0 | 9417 | 1.2293 | 0.7857 |
| 0.0392 | 44.0 | 9636 | 1.3878 | 0.8095 |
| 0.0272 | 45.0 | 9855 | 1.2021 | 0.8571 |
| 0.0125 | 46.0 | 10074 | 2.3102 | 0.7619 |
| 0.0149 | 47.0 | 10293 | 1.8621 | 0.7857 |
| 0.0032 | 48.0 | 10512 | 1.7899 | 0.8333 |
| 0.0016 | 49.0 | 10731 | 1.9528 | 0.8095 |
| 0.0001 | 50.0 | 10950 | 1.7808 | 0.8095 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
VATSAL1729/Pyramids
|
VATSAL1729
| 2023-12-24T14:39:51Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-12-24T14:38:58Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: VATSAL1729/Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
yuanhuaisen/autotrain-9oj9k-0pndc
|
yuanhuaisen
| 2023-12-24T14:30:41Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:yuanhuaisen/autotrain-data-autotrain-9oj9k-0pndc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T14:30:09Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- yuanhuaisen/autotrain-data-autotrain-9oj9k-0pndc
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.5109696984291077
- f1_macro: 0.7355182828867041
- f1_micro: 0.7840909090909092
- f1_weighted: 0.7828294512505038
- precision_macro: 0.7308866944925176
- precision_micro: 0.7840909090909091
- precision_weighted: 0.782664525741997
- recall_macro: 0.7416666666666667
- recall_micro: 0.7840909090909091
- recall_weighted: 0.7840909090909091
- accuracy: 0.7840909090909091
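A hedged inference sketch with the lower-level API, assuming a standard ViT image-classification checkpoint produced by AutoTrain (the image path is illustrative):
```python
# Hedged sketch: classify one image with this AutoTrain checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "yuanhuaisen/autotrain-9oj9k-0pndc"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg")  # illustrative input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```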
|
PiyushJha/xyz
|
PiyushJha
| 2023-12-24T14:30:23Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-12-24T14:28:40Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hkivancoral/hushem_40x_deit_tiny_sgd_001_fold5
|
hkivancoral
| 2023-12-24T14:22:40Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T13:52:03Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_sgd_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8292682926829268
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5908
- Accuracy: 0.8293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
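For reference, a minimal sketch of a `TrainingArguments` object mirroring the list above; the output path and the surrounding `Trainer`/dataset wiring are assumptions, not taken from this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_40x_deit_tiny_sgd_001_fold5",  # assumed output path
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # the results table logs one validation pass per epoch
)
```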
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1602 | 1.0 | 220 | 1.4008 | 0.2927 |
| 0.8954 | 2.0 | 440 | 1.2274 | 0.3902 |
| 0.7859 | 3.0 | 660 | 1.0703 | 0.5366 |
| 0.6718 | 4.0 | 880 | 0.9455 | 0.6098 |
| 0.5505 | 5.0 | 1100 | 0.8399 | 0.6341 |
| 0.4372 | 6.0 | 1320 | 0.7728 | 0.7073 |
| 0.3616 | 7.0 | 1540 | 0.7172 | 0.7317 |
| 0.291 | 8.0 | 1760 | 0.7018 | 0.7317 |
| 0.2597 | 9.0 | 1980 | 0.6678 | 0.7317 |
| 0.2339 | 10.0 | 2200 | 0.6575 | 0.7073 |
| 0.2227 | 11.0 | 2420 | 0.6389 | 0.7073 |
| 0.179 | 12.0 | 2640 | 0.6500 | 0.7073 |
| 0.1598 | 13.0 | 2860 | 0.6290 | 0.7073 |
| 0.1448 | 14.0 | 3080 | 0.6491 | 0.6585 |
| 0.1209 | 15.0 | 3300 | 0.6174 | 0.7073 |
| 0.1192 | 16.0 | 3520 | 0.6084 | 0.7073 |
| 0.1037 | 17.0 | 3740 | 0.6013 | 0.7317 |
| 0.0848 | 18.0 | 3960 | 0.5985 | 0.7073 |
| 0.1048 | 19.0 | 4180 | 0.5896 | 0.7317 |
| 0.0665 | 20.0 | 4400 | 0.6043 | 0.7073 |
| 0.0723 | 21.0 | 4620 | 0.5932 | 0.7561 |
| 0.0444 | 22.0 | 4840 | 0.5749 | 0.8049 |
| 0.0448 | 23.0 | 5060 | 0.5862 | 0.7805 |
| 0.0396 | 24.0 | 5280 | 0.5758 | 0.8049 |
| 0.0378 | 25.0 | 5500 | 0.5566 | 0.8293 |
| 0.0428 | 26.0 | 5720 | 0.5740 | 0.8293 |
| 0.0345 | 27.0 | 5940 | 0.5631 | 0.8049 |
| 0.0515 | 28.0 | 6160 | 0.5844 | 0.8049 |
| 0.0324 | 29.0 | 6380 | 0.5872 | 0.8293 |
| 0.0292 | 30.0 | 6600 | 0.5789 | 0.8293 |
| 0.0208 | 31.0 | 6820 | 0.5688 | 0.8293 |
| 0.0421 | 32.0 | 7040 | 0.5703 | 0.8293 |
| 0.0246 | 33.0 | 7260 | 0.5663 | 0.8293 |
| 0.0318 | 34.0 | 7480 | 0.5726 | 0.8293 |
| 0.0151 | 35.0 | 7700 | 0.5751 | 0.8293 |
| 0.0169 | 36.0 | 7920 | 0.5772 | 0.8293 |
| 0.017 | 37.0 | 8140 | 0.5665 | 0.8293 |
| 0.0393 | 38.0 | 8360 | 0.5815 | 0.8293 |
| 0.0218 | 39.0 | 8580 | 0.5765 | 0.8293 |
| 0.0156 | 40.0 | 8800 | 0.5742 | 0.8293 |
| 0.0183 | 41.0 | 9020 | 0.5956 | 0.8293 |
| 0.0155 | 42.0 | 9240 | 0.5886 | 0.8293 |
| 0.0134 | 43.0 | 9460 | 0.5775 | 0.8293 |
| 0.0186 | 44.0 | 9680 | 0.5921 | 0.8293 |
| 0.0177 | 45.0 | 9900 | 0.5863 | 0.8293 |
| 0.0115 | 46.0 | 10120 | 0.5918 | 0.8293 |
| 0.0196 | 47.0 | 10340 | 0.5892 | 0.8293 |
| 0.0172 | 48.0 | 10560 | 0.5892 | 0.8293 |
| 0.0129 | 49.0 | 10780 | 0.5910 | 0.8293 |
| 0.0197 | 50.0 | 11000 | 0.5908 | 0.8293 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Sigurdur/ice-roberta
|
Sigurdur
| 2023-12-24T14:20:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"is",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-24T14:09:31Z |
---
language:
- is
library_name: transformers
pipeline_tag: fill-mask
---
# rICE - RoBERTa-based Icelandic masked language model
A masked language model trained on the [IGC-News1](http://hdl.handle.net/20.500.12537/236) dataset.
The project was inspired by [this article](https://doi.org/10.48550/arXiv.2201.05601).
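# Usage
A minimal sketch, assuming the checkpoint works with the standard `fill-mask` pipeline and RoBERTa's `<mask>` token; the Icelandic example sentence is illustrative:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Sigurdur/ice-roberta")
for pred in unmasker("Reykjavík er höfuðborg <mask>."):  # "Reykjavík is the capital of <mask>."
    print(pred["token_str"], round(pred["score"], 3))
```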
# Author
Sigurdur Haukur Birgisson
|
hoangquang27/llama-2-7b-chat
|
hoangquang27
| 2023-12-24T14:18:13Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-12-19T07:43:31Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
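Pending the official snippet, a minimal loading sketch based on the adapter metadata above; the base-model id comes from the card header, while precision and device placement are assumptions:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "hoangquang27/llama-2-7b-chat")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```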
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
torch-uncertainty/msk_resnet18_c10
|
torch-uncertainty
| 2023-12-24T14:15:27Z | 0 | 0 | null |
[
"vision",
"classification",
"uncertainty",
"dataset:cifar-10",
"license:apache-2.0",
"region:us"
] | null | 2023-12-22T14:43:30Z |
---
license: apache-2.0
tags:
- vision
- classification
- uncertainty
datasets:
- cifar-10
---
# Masksembles ResNet trained on CIFAR-10
## How to use
Install [TorchUncertainty](https://torch-uncertainty.github.io/) ([GitHub](https://github.com/ENSTA-U2IS/torch-uncertainty)) to use this model.
## License
These weights are provided under the Apache 2.0 license.
|
bmv202199/Spongebob
|
bmv202199
| 2023-12-24T14:06:08Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-12-24T14:04:51Z |
---
license: bigscience-openrail-m
---
|
hkivancoral/hushem_40x_deit_base_rms_0001_fold3
|
hkivancoral
| 2023-12-24T14:05:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T13:22:12Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8372093023255814
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_0001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3610
- Accuracy: 0.8372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0864 | 1.0 | 217 | 0.7004 | 0.8140 |
| 0.0159 | 2.0 | 434 | 0.8536 | 0.8372 |
| 0.116 | 3.0 | 651 | 0.4860 | 0.9070 |
| 0.0015 | 4.0 | 868 | 0.3942 | 0.9302 |
| 0.0089 | 5.0 | 1085 | 0.6012 | 0.8372 |
| 0.0001 | 6.0 | 1302 | 0.5930 | 0.8605 |
| 0.0006 | 7.0 | 1519 | 0.5592 | 0.8837 |
| 0.15 | 8.0 | 1736 | 0.5307 | 0.8837 |
| 0.0001 | 9.0 | 1953 | 0.5223 | 0.8372 |
| 0.0766 | 10.0 | 2170 | 0.7047 | 0.8372 |
| 0.0001 | 11.0 | 2387 | 1.3810 | 0.8140 |
| 0.0061 | 12.0 | 2604 | 1.1687 | 0.8140 |
| 0.0 | 13.0 | 2821 | 1.4554 | 0.8140 |
| 0.0 | 14.0 | 3038 | 1.4775 | 0.8372 |
| 0.0 | 15.0 | 3255 | 1.5402 | 0.8140 |
| 0.0 | 16.0 | 3472 | 1.6119 | 0.8140 |
| 0.0 | 17.0 | 3689 | 1.6931 | 0.8140 |
| 0.0 | 18.0 | 3906 | 1.7745 | 0.8140 |
| 0.0 | 19.0 | 4123 | 1.8507 | 0.8372 |
| 0.0 | 20.0 | 4340 | 1.9114 | 0.8372 |
| 0.0 | 21.0 | 4557 | 1.9677 | 0.8372 |
| 0.0 | 22.0 | 4774 | 2.0255 | 0.8372 |
| 0.0 | 23.0 | 4991 | 2.0805 | 0.8372 |
| 0.0 | 24.0 | 5208 | 2.1308 | 0.8372 |
| 0.0 | 25.0 | 5425 | 2.1719 | 0.8372 |
| 0.0 | 26.0 | 5642 | 2.2040 | 0.8372 |
| 0.0 | 27.0 | 5859 | 2.2288 | 0.8372 |
| 0.0 | 28.0 | 6076 | 2.2485 | 0.8372 |
| 0.0 | 29.0 | 6293 | 2.2646 | 0.8372 |
| 0.0 | 30.0 | 6510 | 2.2781 | 0.8372 |
| 0.0 | 31.0 | 6727 | 2.2896 | 0.8372 |
| 0.0 | 32.0 | 6944 | 2.2995 | 0.8372 |
| 0.0 | 33.0 | 7161 | 2.3082 | 0.8372 |
| 0.0 | 34.0 | 7378 | 2.3158 | 0.8372 |
| 0.0 | 35.0 | 7595 | 2.3224 | 0.8372 |
| 0.0 | 36.0 | 7812 | 2.3283 | 0.8372 |
| 0.0 | 37.0 | 8029 | 2.3335 | 0.8372 |
| 0.0 | 38.0 | 8246 | 2.3381 | 0.8372 |
| 0.0 | 39.0 | 8463 | 2.3422 | 0.8372 |
| 0.0 | 40.0 | 8680 | 2.3458 | 0.8372 |
| 0.0 | 41.0 | 8897 | 2.3489 | 0.8372 |
| 0.0 | 42.0 | 9114 | 2.3516 | 0.8372 |
| 0.0 | 43.0 | 9331 | 2.3540 | 0.8372 |
| 0.0 | 44.0 | 9548 | 2.3560 | 0.8372 |
| 0.0 | 45.0 | 9765 | 2.3576 | 0.8372 |
| 0.0 | 46.0 | 9982 | 2.3589 | 0.8372 |
| 0.0 | 47.0 | 10199 | 2.3599 | 0.8372 |
| 0.0 | 48.0 | 10416 | 2.3606 | 0.8372 |
| 0.0 | 49.0 | 10633 | 2.3610 | 0.8372 |
| 0.0 | 50.0 | 10850 | 2.3610 | 0.8372 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
irishzhang/my_awesome_qa_model
|
irishzhang
| 2023-12-24T14:02:18Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-24T13:48:52Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7354
## Model description
More information needed
## Intended uses & limitations
More information needed
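Until the card is filled in, a minimal usage sketch for this extractive QA checkpoint; the question/context pair is illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="irishzhang/my_awesome_qa_model")
result = qa(
    question="Which base model was fine-tuned?",
    context="my_awesome_qa_model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```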
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3899 |
| 2.7762 | 2.0 | 500 | 1.8462 |
| 2.7762 | 3.0 | 750 | 1.7354 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0
- Datasets 2.14.5
- Tokenizers 0.14.0
|
hkivancoral/hushem_40x_deit_base_rms_001_fold3
|
hkivancoral
| 2023-12-24T14:02:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T13:19:18Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5581395348837209
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0356
- Accuracy: 0.5581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1943 | 1.0 | 217 | 1.3862 | 0.3488 |
| 1.2108 | 2.0 | 434 | 1.3456 | 0.3721 |
| 0.8764 | 3.0 | 651 | 1.3683 | 0.4884 |
| 0.7995 | 4.0 | 868 | 0.8441 | 0.5814 |
| 0.8665 | 5.0 | 1085 | 1.2083 | 0.5116 |
| 0.7433 | 6.0 | 1302 | 0.7858 | 0.7209 |
| 0.7205 | 7.0 | 1519 | 0.8439 | 0.6744 |
| 0.6415 | 8.0 | 1736 | 0.6198 | 0.6512 |
| 0.6773 | 9.0 | 1953 | 0.8169 | 0.6744 |
| 0.5449 | 10.0 | 2170 | 0.8224 | 0.6512 |
| 0.5225 | 11.0 | 2387 | 0.7556 | 0.7209 |
| 0.5268 | 12.0 | 2604 | 0.8703 | 0.6744 |
| 0.41 | 13.0 | 2821 | 0.7919 | 0.6512 |
| 0.4695 | 14.0 | 3038 | 0.9473 | 0.6744 |
| 0.3173 | 15.0 | 3255 | 1.2235 | 0.6512 |
| 0.3283 | 16.0 | 3472 | 1.3091 | 0.6512 |
| 0.3212 | 17.0 | 3689 | 1.0773 | 0.6047 |
| 0.3662 | 18.0 | 3906 | 0.9193 | 0.6279 |
| 0.3712 | 19.0 | 4123 | 0.9811 | 0.6744 |
| 0.3483 | 20.0 | 4340 | 1.5620 | 0.5814 |
| 0.2594 | 21.0 | 4557 | 1.8035 | 0.5814 |
| 0.3019 | 22.0 | 4774 | 1.3880 | 0.6744 |
| 0.2498 | 23.0 | 4991 | 1.6113 | 0.5814 |
| 0.2349 | 24.0 | 5208 | 1.2780 | 0.6047 |
| 0.1589 | 25.0 | 5425 | 1.6674 | 0.6512 |
| 0.2341 | 26.0 | 5642 | 1.6966 | 0.6512 |
| 0.1986 | 27.0 | 5859 | 1.4673 | 0.6047 |
| 0.1141 | 28.0 | 6076 | 1.6993 | 0.6512 |
| 0.1291 | 29.0 | 6293 | 2.0265 | 0.5581 |
| 0.1273 | 30.0 | 6510 | 1.8689 | 0.6279 |
| 0.0887 | 31.0 | 6727 | 1.4863 | 0.6977 |
| 0.101 | 32.0 | 6944 | 2.2258 | 0.6279 |
| 0.09 | 33.0 | 7161 | 1.6918 | 0.5814 |
| 0.063 | 34.0 | 7378 | 2.4040 | 0.5349 |
| 0.0263 | 35.0 | 7595 | 2.2869 | 0.5814 |
| 0.0357 | 36.0 | 7812 | 2.0118 | 0.6047 |
| 0.033 | 37.0 | 8029 | 2.5046 | 0.6279 |
| 0.0417 | 38.0 | 8246 | 2.0462 | 0.6512 |
| 0.0049 | 39.0 | 8463 | 3.1349 | 0.5814 |
| 0.0034 | 40.0 | 8680 | 2.4922 | 0.6279 |
| 0.0115 | 41.0 | 8897 | 2.7021 | 0.5581 |
| 0.0248 | 42.0 | 9114 | 3.1496 | 0.5116 |
| 0.0078 | 43.0 | 9331 | 2.6336 | 0.6279 |
| 0.0022 | 44.0 | 9548 | 3.2458 | 0.5349 |
| 0.0015 | 45.0 | 9765 | 3.3966 | 0.5349 |
| 0.0031 | 46.0 | 9982 | 4.1353 | 0.5116 |
| 0.0 | 47.0 | 10199 | 3.5481 | 0.5814 |
| 0.0002 | 48.0 | 10416 | 3.8712 | 0.5349 |
| 0.0 | 49.0 | 10633 | 4.0305 | 0.5581 |
| 0.0 | 50.0 | 10850 | 4.0356 | 0.5581 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_tiny_sgd_001_fold4
|
hkivancoral
| 2023-12-24T13:51:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T13:21:53Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_sgd_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8809523809523809
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- Accuracy: 0.8810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2231 | 1.0 | 219 | 1.3846 | 0.2857 |
| 0.9485 | 2.0 | 438 | 1.1776 | 0.5238 |
| 0.8421 | 3.0 | 657 | 0.9985 | 0.6429 |
| 0.6802 | 4.0 | 876 | 0.8236 | 0.7381 |
| 0.5815 | 5.0 | 1095 | 0.6866 | 0.7857 |
| 0.5091 | 6.0 | 1314 | 0.5853 | 0.8095 |
| 0.3792 | 7.0 | 1533 | 0.5105 | 0.8333 |
| 0.3552 | 8.0 | 1752 | 0.4443 | 0.8333 |
| 0.3174 | 9.0 | 1971 | 0.4029 | 0.8810 |
| 0.2621 | 10.0 | 2190 | 0.3730 | 0.8571 |
| 0.2168 | 11.0 | 2409 | 0.3473 | 0.8571 |
| 0.2263 | 12.0 | 2628 | 0.3296 | 0.9048 |
| 0.1689 | 13.0 | 2847 | 0.3233 | 0.9048 |
| 0.171 | 14.0 | 3066 | 0.3040 | 0.8810 |
| 0.1176 | 15.0 | 3285 | 0.3059 | 0.8810 |
| 0.1241 | 16.0 | 3504 | 0.2811 | 0.8571 |
| 0.1343 | 17.0 | 3723 | 0.2712 | 0.8571 |
| 0.0953 | 18.0 | 3942 | 0.2802 | 0.8571 |
| 0.0918 | 19.0 | 4161 | 0.2700 | 0.8571 |
| 0.0691 | 20.0 | 4380 | 0.2755 | 0.8571 |
| 0.088 | 21.0 | 4599 | 0.2615 | 0.8571 |
| 0.0857 | 22.0 | 4818 | 0.2483 | 0.8571 |
| 0.0654 | 23.0 | 5037 | 0.2562 | 0.8571 |
| 0.0661 | 24.0 | 5256 | 0.2789 | 0.8571 |
| 0.0463 | 25.0 | 5475 | 0.2435 | 0.8571 |
| 0.0362 | 26.0 | 5694 | 0.2633 | 0.8571 |
| 0.0272 | 27.0 | 5913 | 0.2844 | 0.8571 |
| 0.041 | 28.0 | 6132 | 0.2942 | 0.8571 |
| 0.034 | 29.0 | 6351 | 0.2744 | 0.8571 |
| 0.0352 | 30.0 | 6570 | 0.2644 | 0.8810 |
| 0.0212 | 31.0 | 6789 | 0.2648 | 0.8810 |
| 0.0359 | 32.0 | 7008 | 0.2431 | 0.8810 |
| 0.0203 | 33.0 | 7227 | 0.2434 | 0.8810 |
| 0.0209 | 34.0 | 7446 | 0.2577 | 0.8810 |
| 0.0254 | 35.0 | 7665 | 0.2645 | 0.8810 |
| 0.0178 | 36.0 | 7884 | 0.2497 | 0.8810 |
| 0.0232 | 37.0 | 8103 | 0.2639 | 0.8810 |
| 0.015 | 38.0 | 8322 | 0.2391 | 0.8810 |
| 0.0246 | 39.0 | 8541 | 0.2615 | 0.8810 |
| 0.0228 | 40.0 | 8760 | 0.2445 | 0.8810 |
| 0.0203 | 41.0 | 8979 | 0.2448 | 0.8810 |
| 0.014 | 42.0 | 9198 | 0.2402 | 0.8810 |
| 0.0231 | 43.0 | 9417 | 0.2372 | 0.8810 |
| 0.0179 | 44.0 | 9636 | 0.2499 | 0.8810 |
| 0.0197 | 45.0 | 9855 | 0.2540 | 0.8810 |
| 0.0118 | 46.0 | 10074 | 0.2416 | 0.8810 |
| 0.0134 | 47.0 | 10293 | 0.2401 | 0.8810 |
| 0.0155 | 48.0 | 10512 | 0.2412 | 0.8810 |
| 0.0103 | 49.0 | 10731 | 0.2400 | 0.8810 |
| 0.0156 | 50.0 | 10950 | 0.2388 | 0.8810 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
ntc-ai/SDXL-LoRA-slider.radiant-green-eyes
|
ntc-ai
| 2023-12-24T13:45:15Z | 131 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-24T13:45:12Z |
---
language:
- en
thumbnail: "images/evaluate/radiant green eyes.../radiant green eyes_17_3.0.png"
widget:
- text: radiant green eyes
output:
url: images/radiant green eyes_17_3.0.png
- text: radiant green eyes
output:
url: images/radiant green eyes_19_3.0.png
- text: radiant green eyes
output:
url: images/radiant green eyes_20_3.0.png
- text: radiant green eyes
output:
url: images/radiant green eyes_21_3.0.png
- text: radiant green eyes
output:
url: images/radiant green eyes_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "radiant green eyes"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - radiant green eyes (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/radiant green eyes_17_-3.0.png" width=256 height=256 /> | <img src="images/radiant green eyes_17_0.0.png" width=256 height=256 /> | <img src="images/radiant green eyes_17_3.0.png" width=256 height=256 /> |
| <img src="images/radiant green eyes_19_-3.0.png" width=256 height=256 /> | <img src="images/radiant green eyes_19_0.0.png" width=256 height=256 /> | <img src="images/radiant green eyes_19_3.0.png" width=256 height=256 /> |
| <img src="images/radiant green eyes_20_-3.0.png" width=256 height=256 /> | <img src="images/radiant green eyes_20_0.0.png" width=256 height=256 /> | <img src="images/radiant green eyes_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
radiant green eyes
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.radiant-green-eyes', weight_name='radiant green eyes.safetensors', adapter_name="radiant green eyes")
# Activate the LoRA
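# Assumption: adapter_weights plays the role of the slider "Strength" shown above (here a moderate +2.0)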
pipe.set_adapters(["radiant green eyes"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, radiant green eyes"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 590 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
baichuan-inc/Baichuan2-7B-Chat-4bits
|
baichuan-inc
| 2023-12-24T13:38:33Z | 80 | 57 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2023-08-30T10:11:39Z |
---
language:
- en
- zh
license: other
tasks:
- text-generation
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a>
</div>
<div align="center">
百川API支持搜索增强和192K长窗口,新增百川搜索增强知识库、限时免费!<br>
🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉
</div>
# 目录/Table of Contents
- [📖 模型介绍/Introduction](#Introduction)
- [⚙️ 快速开始/Quick Start](#Start)
- [📊 Benchmark评估/Benchmark Evaluation](#Benchmark)
- [📜 声明与协议/Terms and Conditions](#Terms)
# <span id="Introduction">模型介绍/Introduction</span>
Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark
上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits
量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表:
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:[email protected]).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">快速开始/Quick Start</span>
在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-7B-Chat-4bits", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-7B-Chat-4bits", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan2-7B-Chat-4bits")
messages = []
messages.append({"role": "user", "content": "解释一下“温故而知新”"})
response = model.chat(tokenizer, messages)
print(response)
# Output:
# "温故而知新"是一句中国古代的成语,出自《论语·为政》篇。这句话的意思是:通过回顾过去,我们可以发现新的知识和理解。换句话说,学习历史和经验可以让我们更好地理解现在和未来。
# 这句话鼓励我们在学习和生活中不断地回顾和反思过去的经验,从而获得新的启示和成长。通过重温旧的知识和经历,我们可以发现新的观点和理解,从而更好地应对不断变化的世界和挑战。
```
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## 训练过程模型/Training Dynamics
除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用
([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化:
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Terms">声明与协议/Terms and Conditions</span>
## 声明
我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用
Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2
模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用
Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## 协议
社区使用 Baichuan 2 模型需要遵循 [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) 和[《Baichuan 2 模型社区许可协议》](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)。Baichuan 2 模型支持商业用途,如果您计划将 Baichuan 2 模型或其衍生品用于商业目的,请您确认您的主体符合以下情况:
1. 您或您的关联方的服务或产品的日均用户活跃量(DAU)低于100万。
2. 您或您的关联方不是软件服务提供商、云服务提供商。
3. 您或您的关联方不存在将授予您的商用许可,未经百川许可二次授权给其他第三方的可能。
在符合以上条件的前提下,您需要通过以下联系邮箱 [email protected] ,提交《Baichuan 2 模型社区许可协议》要求的申请材料。审核通过后,百川将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。
The community usage of Baichuan 2 model requires adherence to [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) and [Community License for Baichuan2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). The Baichuan 2 model supports commercial use. If you plan to use the Baichuan 2 model or its derivatives for commercial purposes, please ensure that your entity meets the following conditions:
1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million.
2. Neither you nor your affiliates are software service providers or cloud service providers.
3. There is no possibility for you or your affiliates to grant the commercial license given to you, to reauthorize it to other third parties without Baichuan's permission.
Upon meeting the above conditions, you need to submit the application materials required by the Baichuan 2 Model Community License Agreement via the following contact email: [email protected]. Once approved, Baichuan will hereby grant you a non-exclusive, global, non-transferable, non-sublicensable, revocable commercial copyright license.
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:[email protected]
[Email]: mailto:[email protected]
[[email protected]]: mailto:[email protected]
[训练过程checkpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
|
xiawei910/Taxi-v3
|
xiawei910
| 2023-12-24T13:35:05Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-21T07:45:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="xiawei910/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
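A short greedy-rollout sketch with the loaded Q-table; the `qtable` key and the `terminated`/`truncated` step API follow the Deep RL course conventions and are assumptions here:

```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```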
|
xiawei910/q-FrozenLake-v1-4x4-noSlippery
|
xiawei910
| 2023-12-24T13:33:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-20T13:02:32Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="xiawei910/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
getdiffus/SDaB1-Detail-Tweaker-LoRA-LoRA
|
getdiffus
| 2023-12-24T13:29:30Z | 0 | 0 | null |
[
"StableDiffusion",
"GetDiffus",
"anime",
"photorealistic",
"concept",
"detailed",
"lora",
"undetailed",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-24T13:29:11Z |
---
license: creativeml-openrail-m
tags:
- StableDiffusion
- GetDiffus
- anime
- photorealistic
- concept
- detailed
- lora
- undetailed
---
# SDaB1-Detail-Tweaker-LoRA-LoRA
SDID: **SDaB1**
For details on this model and how to use it, or to find more models, visit [Detail Tweaker LoRA (细节调整LoRA) on GetDiffus](https://getdiffus.com/m/SDaB1/Detail%20Tweaker%20LoRA%20%28细节调整LoRA%29).
GetDiffus is a model sharing site. It lets you upload, search, and discover Stable Diffusion models, and stores them on Hugging Face.
## Links
This StableDiffusion model was uploaded by [@lightning-joyce](https://huggingface.co/lightning-joyce).
Follow me on X (Twitter): https://x.com/lightning_joyce
Join our Discord: https://discord.gg/NR7bJXKFpX
|
baichuan-inc/Baichuan2-13B-Base
|
baichuan-inc
| 2023-12-24T13:25:24Z | 1,113 | 78 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-30T10:11:24Z |
---
language:
- en
- zh
license: other
tasks:
- text-generation
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a>
</div>
<div align="center">
百川API支持搜索增强和192K长窗口,新增百川搜索增强知识库、限时免费!<br>
🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉
</div>
# 目录/Table of Contents
- [📖 模型介绍/Introduction](#Introduction)
- [⚙️ 快速开始/Quick Start](#Start)
- [📊 Benchmark评估/Benchmark Evaluation](#Benchmark)
- [📜 声明与协议/Terms and Conditions](#Terms)
# <span id="Introduction">模型介绍/Introduction</span>
Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark
上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits
量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表:
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:[email protected]).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">快速开始/Quick Start</span>
在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-13B-Base", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-13B-Base", device_map="auto", trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## 训练过程模型/Training Dynamics
除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用
([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化:
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Terms">声明与协议/Terms and Conditions</span>
## 声明
我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用
Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2
模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用
Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## 协议
社区使用 Baichuan 2 模型需要遵循 [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) 和[《Baichuan 2 模型社区许可协议》](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)。Baichuan 2 模型支持商业用途,如果您计划将 Baichuan 2 模型或其衍生品用于商业目的,请您确认您的主体符合以下情况:
1. 您或您的关联方的服务或产品的日均用户活跃量(DAU)低于100万。
2. 您或您的关联方不是软件服务提供商、云服务提供商。
3. 您或您的关联方不存在将授予您的商用许可,未经百川许可二次授权给其他第三方的可能。
在符合以上条件的前提下,您需要通过以下联系邮箱 [email protected] ,提交《Baichuan 2 模型社区许可协议》要求的申请材料。审核通过后,百川将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。
The community usage of Baichuan 2 model requires adherence to [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) and [Community License for Baichuan2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). The Baichuan 2 model supports commercial use. If you plan to use the Baichuan 2 model or its derivatives for commercial purposes, please ensure that your entity meets the following conditions:
1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million.
2. Neither you nor your affiliates are software service providers or cloud service providers.
3. There is no possibility for you or your affiliates to grant the commercial license given to you, to reauthorize it to other third parties without Baichuan's permission.
Upon meeting the above conditions, you need to submit the application materials required by the Baichuan 2 Model Community License Agreement via the following contact email: [email protected]. Once approved, Baichuan will hereby grant you a non-exclusive, global, non-transferable, non-sublicensable, revocable commercial copyright license.
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:[email protected]
[Email]: mailto:[email protected]
[[email protected]]: mailto:[email protected]
[训练过程heckpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
|
hkivancoral/hushem_40x_deit_base_rms_0001_fold2
|
hkivancoral
| 2023-12-24T13:22:03Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T12:37:56Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7555555555555555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_0001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0887
- Accuracy: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
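Pending fuller documentation, a minimal inference sketch (standard `transformers` usage assumed; the image path is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: run this checkpoint through the image-classification pipeline
classifier = pipeline("image-classification", model="hkivancoral/hushem_40x_deit_base_rms_0001_fold2")
print(classifier("example.jpg"))  # "example.jpg" is a placeholder image path
```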
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0825 | 1.0 | 215 | 1.5281 | 0.7111 |
| 0.0311 | 2.0 | 430 | 1.2158 | 0.8 |
| 0.0011 | 3.0 | 645 | 1.8306 | 0.6889 |
| 0.0414 | 4.0 | 860 | 2.0416 | 0.7333 |
| 0.0002 | 5.0 | 1075 | 2.3340 | 0.6444 |
| 0.0027 | 6.0 | 1290 | 1.1579 | 0.7556 |
| 0.0001 | 7.0 | 1505 | 2.3412 | 0.6889 |
| 0.0 | 8.0 | 1720 | 2.3885 | 0.7111 |
| 0.0 | 9.0 | 1935 | 2.4917 | 0.7333 |
| 0.0 | 10.0 | 2150 | 2.6169 | 0.7333 |
| 0.0 | 11.0 | 2365 | 2.7660 | 0.7333 |
| 0.0 | 12.0 | 2580 | 2.9176 | 0.7333 |
| 0.0 | 13.0 | 2795 | 3.0652 | 0.7333 |
| 0.0 | 14.0 | 3010 | 3.1998 | 0.7556 |
| 0.0 | 15.0 | 3225 | 3.3068 | 0.7556 |
| 0.0 | 16.0 | 3440 | 3.4034 | 0.7556 |
| 0.0 | 17.0 | 3655 | 3.4958 | 0.7556 |
| 0.0 | 18.0 | 3870 | 3.5902 | 0.7556 |
| 0.0 | 19.0 | 4085 | 3.6748 | 0.7556 |
| 0.0 | 20.0 | 4300 | 3.7449 | 0.7556 |
| 0.0 | 21.0 | 4515 | 3.7990 | 0.7556 |
| 0.0 | 22.0 | 4730 | 3.8408 | 0.7556 |
| 0.0 | 23.0 | 4945 | 3.8743 | 0.7556 |
| 0.0 | 24.0 | 5160 | 3.9017 | 0.7556 |
| 0.0 | 25.0 | 5375 | 3.9247 | 0.7556 |
| 0.0 | 26.0 | 5590 | 3.9444 | 0.7556 |
| 0.0 | 27.0 | 5805 | 3.9616 | 0.7556 |
| 0.0 | 28.0 | 6020 | 3.9766 | 0.7556 |
| 0.0 | 29.0 | 6235 | 3.9899 | 0.7556 |
| 0.0 | 30.0 | 6450 | 4.0018 | 0.7556 |
| 0.0 | 31.0 | 6665 | 4.0124 | 0.7556 |
| 0.0 | 32.0 | 6880 | 4.0219 | 0.7556 |
| 0.0 | 33.0 | 7095 | 4.0305 | 0.7556 |
| 0.0 | 34.0 | 7310 | 4.0382 | 0.7556 |
| 0.0 | 35.0 | 7525 | 4.0452 | 0.7556 |
| 0.0 | 36.0 | 7740 | 4.0514 | 0.7556 |
| 0.0 | 37.0 | 7955 | 4.0571 | 0.7556 |
| 0.0 | 38.0 | 8170 | 4.0622 | 0.7556 |
| 0.0 | 39.0 | 8385 | 4.0668 | 0.7556 |
| 0.0 | 40.0 | 8600 | 4.0708 | 0.7556 |
| 0.0 | 41.0 | 8815 | 4.0744 | 0.7556 |
| 0.0 | 42.0 | 9030 | 4.0776 | 0.7556 |
| 0.0 | 43.0 | 9245 | 4.0803 | 0.7556 |
| 0.0 | 44.0 | 9460 | 4.0826 | 0.7556 |
| 0.0 | 45.0 | 9675 | 4.0846 | 0.7556 |
| 0.0 | 46.0 | 9890 | 4.0861 | 0.7556 |
| 0.0 | 47.0 | 10105 | 4.0873 | 0.7556 |
| 0.0 | 48.0 | 10320 | 4.0881 | 0.7556 |
| 0.0 | 49.0 | 10535 | 4.0886 | 0.7556 |
| 0.0 | 50.0 | 10750 | 4.0887 | 0.7556 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_tiny_sgd_001_fold3
|
hkivancoral
| 2023-12-24T13:21:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T12:51:42Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_sgd_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8372093023255814
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_sgd_001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4899
- Accuracy: 0.8372
## Model description
More information needed
## Intended uses & limitations
More information needed
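Pending fuller documentation, a manual forward-pass sketch (standard `transformers` usage assumed; the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Sketch: preprocess one image and read off the predicted class
processor = AutoImageProcessor.from_pretrained("hkivancoral/hushem_40x_deit_tiny_sgd_001_fold3")
model = AutoModelForImageClassification.from_pretrained("hkivancoral/hushem_40x_deit_tiny_sgd_001_fold3")
inputs = processor(images=Image.open("example.jpg"), return_tensors="pt")  # placeholder path
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```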
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1798 | 1.0 | 217 | 1.2889 | 0.4186 |
| 1.0098 | 2.0 | 434 | 1.1067 | 0.6047 |
| 0.7827 | 3.0 | 651 | 0.9427 | 0.6977 |
| 0.6326 | 4.0 | 868 | 0.7917 | 0.6977 |
| 0.5443 | 5.0 | 1085 | 0.6647 | 0.7907 |
| 0.4438 | 6.0 | 1302 | 0.5810 | 0.8140 |
| 0.3761 | 7.0 | 1519 | 0.5185 | 0.8372 |
| 0.3386 | 8.0 | 1736 | 0.4778 | 0.8140 |
| 0.2796 | 9.0 | 1953 | 0.4431 | 0.8605 |
| 0.2037 | 10.0 | 2170 | 0.4372 | 0.8605 |
| 0.1624 | 11.0 | 2387 | 0.3943 | 0.8837 |
| 0.1477 | 12.0 | 2604 | 0.4019 | 0.8605 |
| 0.1485 | 13.0 | 2821 | 0.3856 | 0.8605 |
| 0.1192 | 14.0 | 3038 | 0.3686 | 0.8605 |
| 0.1115 | 15.0 | 3255 | 0.3722 | 0.8605 |
| 0.0891 | 16.0 | 3472 | 0.3567 | 0.8837 |
| 0.0776 | 17.0 | 3689 | 0.3631 | 0.8605 |
| 0.1039 | 18.0 | 3906 | 0.3600 | 0.8605 |
| 0.0608 | 19.0 | 4123 | 0.3514 | 0.8605 |
| 0.0639 | 20.0 | 4340 | 0.3706 | 0.8605 |
| 0.0555 | 21.0 | 4557 | 0.3773 | 0.8605 |
| 0.0552 | 22.0 | 4774 | 0.3713 | 0.8372 |
| 0.0457 | 23.0 | 4991 | 0.3749 | 0.8372 |
| 0.0383 | 24.0 | 5208 | 0.3901 | 0.8372 |
| 0.0332 | 25.0 | 5425 | 0.3933 | 0.8372 |
| 0.0322 | 26.0 | 5642 | 0.3995 | 0.8372 |
| 0.0278 | 27.0 | 5859 | 0.4012 | 0.8372 |
| 0.0212 | 28.0 | 6076 | 0.3938 | 0.8372 |
| 0.0224 | 29.0 | 6293 | 0.4080 | 0.8372 |
| 0.0218 | 30.0 | 6510 | 0.4237 | 0.8372 |
| 0.0278 | 31.0 | 6727 | 0.4231 | 0.8372 |
| 0.0212 | 32.0 | 6944 | 0.4330 | 0.8372 |
| 0.021 | 33.0 | 7161 | 0.4507 | 0.8372 |
| 0.0127 | 34.0 | 7378 | 0.4390 | 0.8372 |
| 0.0158 | 35.0 | 7595 | 0.4566 | 0.8372 |
| 0.0178 | 36.0 | 7812 | 0.4594 | 0.8372 |
| 0.0109 | 37.0 | 8029 | 0.4570 | 0.8372 |
| 0.0096 | 38.0 | 8246 | 0.4635 | 0.8372 |
| 0.0113 | 39.0 | 8463 | 0.4700 | 0.8372 |
| 0.0149 | 40.0 | 8680 | 0.4815 | 0.8372 |
| 0.0111 | 41.0 | 8897 | 0.4769 | 0.8372 |
| 0.0075 | 42.0 | 9114 | 0.4756 | 0.8372 |
| 0.0093 | 43.0 | 9331 | 0.4800 | 0.8372 |
| 0.009 | 44.0 | 9548 | 0.4851 | 0.8372 |
| 0.0065 | 45.0 | 9765 | 0.4808 | 0.8372 |
| 0.011 | 46.0 | 9982 | 0.4835 | 0.8372 |
| 0.0064 | 47.0 | 10199 | 0.4871 | 0.8372 |
| 0.0093 | 48.0 | 10416 | 0.4902 | 0.8372 |
| 0.0136 | 49.0 | 10633 | 0.4899 | 0.8372 |
| 0.0058 | 50.0 | 10850 | 0.4899 | 0.8372 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
aloobun/bun_mistral_7b_v2
|
aloobun
| 2023-12-24T13:21:21Z | 1,654 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"CoT",
"en",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-20T03:13:51Z |
---
language:
- en
tags:
- CoT
license: cc
---
Fine-tuned from mistralai/Mistral-7B-v0.1 for CoT reasoning.
- GPTQ: [TheBloke/bun_mistral_7b_v2-GPTQ](https://huggingface.co/TheBloke/bun_mistral_7b_v2-GPTQ)
- AWQ: [TheBloke/bun_mistral_7b_v2-AWQ](https://huggingface.co/TheBloke/bun_mistral_7b_v2-AWQ)
- GGUF: [TheBloke/bun_mistral_7b_v2-GGUF](https://huggingface.co/TheBloke/bun_mistral_7b_v2-GGUF)
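A minimal generation sketch with plain `transformers` (the device and dtype settings are assumptions, not part of this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("aloobun/bun_mistral_7b_v2")
model = AutoModelForCausalLM.from_pretrained("aloobun/bun_mistral_7b_v2", device_map="auto")

# A CoT-style prompt, since the model is fine-tuned for chain-of-thought reasoning
prompt = "Let's think step by step: what is 17 * 24?"
out = model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```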
Fine-tuning language models is like tuning the strings of an AI banjo in the cosmic saloon of the digital frontier. We're not just slinging code; it's a harmonious quest to shape the minds of silicon wanderers, crafting binary ballads and electronic echoes. Picture it as cybernetic bardic magic, where we, the tech sorcerers, weave algorithms with strands of imagination. But, in this cosmic hoedown, there's a twist – as we twang the strings of artificial intelligence, we're also seeding the algorithms with a bit of human stardust, adding quirks and quirksome biases. So, as we two-step into this frontier of creation, are we summoning AI troubadours of the future or just conjuring interstellar jesters, spinning tales of silicon whimsy and digital campfire banter?
|
hkivancoral/hushem_40x_deit_base_rms_001_fold2
|
hkivancoral
| 2023-12-24T13:19:08Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T12:36:59Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5777777777777777
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5594
- Accuracy: 0.5778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
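For reference, a sketch of how these settings map onto `transformers.TrainingArguments` (the output directory is a placeholder; the exact training script is not published with this card):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; not the original training script
args = TrainingArguments(
    output_dir="hushem_40x_deit_base_rms_001_fold2",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```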
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1732 | 1.0 | 215 | 1.0108 | 0.4667 |
| 0.7763 | 2.0 | 430 | 1.2138 | 0.5333 |
| 0.7021 | 3.0 | 645 | 1.2446 | 0.4 |
| 0.6002 | 4.0 | 860 | 1.7707 | 0.4444 |
| 0.4988 | 5.0 | 1075 | 2.1116 | 0.4667 |
| 0.4269 | 6.0 | 1290 | 2.3849 | 0.5556 |
| 0.3366 | 7.0 | 1505 | 2.4322 | 0.5556 |
| 0.2961 | 8.0 | 1720 | 3.2646 | 0.5556 |
| 0.2377 | 9.0 | 1935 | 3.1438 | 0.5333 |
| 0.2435 | 10.0 | 2150 | 3.6031 | 0.5778 |
| 0.2593 | 11.0 | 2365 | 3.5951 | 0.4889 |
| 0.1482 | 12.0 | 2580 | 3.8372 | 0.5111 |
| 0.1871 | 13.0 | 2795 | 3.7490 | 0.6222 |
| 0.1246 | 14.0 | 3010 | 3.7977 | 0.5333 |
| 0.166 | 15.0 | 3225 | 3.7321 | 0.5778 |
| 0.1672 | 16.0 | 3440 | 4.6413 | 0.4889 |
| 0.1752 | 17.0 | 3655 | 4.9330 | 0.5556 |
| 0.1214 | 18.0 | 3870 | 4.3615 | 0.5556 |
| 0.0488 | 19.0 | 4085 | 4.4231 | 0.5111 |
| 0.1336 | 20.0 | 4300 | 4.4451 | 0.5778 |
| 0.1002 | 21.0 | 4515 | 3.7455 | 0.5778 |
| 0.0734 | 22.0 | 4730 | 4.4970 | 0.5556 |
| 0.0322 | 23.0 | 4945 | 4.8990 | 0.5333 |
| 0.214 | 24.0 | 5160 | 5.1865 | 0.5778 |
| 0.1242 | 25.0 | 5375 | 5.0088 | 0.5333 |
| 0.0033 | 26.0 | 5590 | 4.9606 | 0.5556 |
| 0.0333 | 27.0 | 5805 | 4.4063 | 0.5778 |
| 0.0592 | 28.0 | 6020 | 4.1719 | 0.5556 |
| 0.0444 | 29.0 | 6235 | 6.2342 | 0.5111 |
| 0.0039 | 30.0 | 6450 | 5.9834 | 0.5333 |
| 0.003 | 31.0 | 6665 | 6.2329 | 0.5333 |
| 0.0008 | 32.0 | 6880 | 6.2499 | 0.6 |
| 0.1078 | 33.0 | 7095 | 5.2542 | 0.6222 |
| 0.0258 | 34.0 | 7310 | 6.7980 | 0.4889 |
| 0.0052 | 35.0 | 7525 | 6.6849 | 0.5333 |
| 0.0003 | 36.0 | 7740 | 6.1342 | 0.5556 |
| 0.0005 | 37.0 | 7955 | 5.4920 | 0.5778 |
| 0.0004 | 38.0 | 8170 | 5.3684 | 0.5778 |
| 0.0148 | 39.0 | 8385 | 5.3551 | 0.5556 |
| 0.0054 | 40.0 | 8600 | 7.4300 | 0.5111 |
| 0.0 | 41.0 | 8815 | 6.8539 | 0.5556 |
| 0.0 | 42.0 | 9030 | 6.8688 | 0.5556 |
| 0.0 | 43.0 | 9245 | 7.1702 | 0.5778 |
| 0.0 | 44.0 | 9460 | 7.4631 | 0.5778 |
| 0.0 | 45.0 | 9675 | 7.7338 | 0.5778 |
| 0.0 | 46.0 | 9890 | 7.9825 | 0.5778 |
| 0.0 | 47.0 | 10105 | 8.2172 | 0.5778 |
| 0.0 | 48.0 | 10320 | 8.4047 | 0.5778 |
| 0.0 | 49.0 | 10535 | 8.5267 | 0.5778 |
| 0.0 | 50.0 | 10750 | 8.5594 | 0.5778 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
anikur93/Taxi-v3
|
anikur93
| 2023-12-24T13:15:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T13:09:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="anikur93/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
MadFritz/rl_course_vizdoom_health_gathering_supreme
|
MadFritz
| 2023-12-24T13:05:18Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T13:05:12Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.74 +/- 6.89
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MadFritz/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Module path assumed from Sample-Factory 2.0's sf_examples; the original card contained a Colab launcher artifact here
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# Module path assumed from Sample-Factory 2.0's sf_examples (see note above)
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it concluded.
|
Sigurdur/isl-sbert-m
|
Sigurdur
| 2023-12-24T13:02:45Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"is",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-24T12:58:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- is
---
# Icelandic SBERT for Sentence Embedding
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Data
The model was trained on 300,000 sentences selected at random from CLARIN-IS: [unannotated News2 from the IGC (RMH)](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/238).
To download the data, run the following command:
```bash
curl --remote-name-all https://repository.clarin.is/repository/xmlui/bitstream/handle/20.500.12537/238{/IGC-News2-22.10.TEI.zip}
```
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Sigurdur/isl-sbert-m')
embeddings = model.encode(sentences)
print(embeddings)
```
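To turn the embeddings into a similarity score (for example, for semantic search), `sentence_transformers.util` can be used directly:
```python
from sentence_transformers import util

# Cosine similarity between the two example embeddings above
print(util.cos_sim(embeddings[0], embeddings[1]))
```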
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Sigurdur/isl-sbert-m')
model = AutoModel.from_pretrained('Sigurdur/isl-sbert-m')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Sigurdur/isl-sbert-m)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 150000 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "constantlr",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
Sigurdur Haukur Birgisson
|
anikur93/q-FrozenLake-v1-4x4-noSlippery
|
anikur93
| 2023-12-24T13:01:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T13:01:50Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="anikur93/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
Xiugapurin/codeparrot-ds
|
Xiugapurin
| 2023-12-24T13:01:35Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlnet",
"text-generation",
"generated_from_trainer",
"base_model:hfl/chinese-xlnet-base",
"base_model:finetune:hfl/chinese-xlnet-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T12:50:24Z |
---
license: apache-2.0
base_model: hfl/chinese-xlnet-base
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [hfl/chinese-xlnet-base](https://huggingface.co/hfl/chinese-xlnet-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
mlx-community/TinyLlama-1.1B-Chat-v0.6
|
mlx-community
| 2023-12-24T12:51:45Z | 20 | 2 |
mlx
|
[
"mlx",
"llama",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T",
"license:apache-2.0",
"region:us"
] | null | 2023-12-21T10:50:21Z |
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
language:
- en
library_name: mlx
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. This repository contains the TinyLlama-1.1B-Chat-v0.6 weights in npz format suitable for use with Apple's MLX framework. For more information about the model, please review [its model card](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6)
#### How to use
```
pip install mlx
pip install huggingface_hub
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples
huggingface-cli download --local-dir-use-symlinks False --local-dir tinyllama-1.1B-Chat-v0.6 mlx-community/tinyllama-1.1B-Chat-v0.6
# Run example
python llms/llama/llama.py --model-path tinyllama-1.1B-Chat-v0.6 --prompt "My name is"
```
|
hkivancoral/hushem_40x_deit_tiny_sgd_001_fold2
|
hkivancoral
| 2023-12-24T12:51:33Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T12:21:46Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_sgd_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6888888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_sgd_001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0440
- Accuracy: 0.6889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0975 | 1.0 | 215 | 1.3370 | 0.3778 |
| 0.8761 | 2.0 | 430 | 1.2895 | 0.4444 |
| 0.7359 | 3.0 | 645 | 1.2565 | 0.4889 |
| 0.6277 | 4.0 | 860 | 1.2398 | 0.5556 |
| 0.5094 | 5.0 | 1075 | 1.2052 | 0.5556 |
| 0.4187 | 6.0 | 1290 | 1.1950 | 0.5778 |
| 0.3909 | 7.0 | 1505 | 1.1310 | 0.6 |
| 0.3137 | 8.0 | 1720 | 1.1412 | 0.5556 |
| 0.2817 | 9.0 | 1935 | 1.0706 | 0.5778 |
| 0.2108 | 10.0 | 2150 | 1.0537 | 0.6 |
| 0.1785 | 11.0 | 2365 | 1.0606 | 0.5778 |
| 0.1677 | 12.0 | 2580 | 1.0202 | 0.5778 |
| 0.1602 | 13.0 | 2795 | 1.0251 | 0.5778 |
| 0.1355 | 14.0 | 3010 | 1.0164 | 0.6 |
| 0.1234 | 15.0 | 3225 | 1.0019 | 0.5778 |
| 0.0937 | 16.0 | 3440 | 0.9960 | 0.6 |
| 0.0963 | 17.0 | 3655 | 0.9708 | 0.5778 |
| 0.0998 | 18.0 | 3870 | 0.9907 | 0.5778 |
| 0.0604 | 19.0 | 4085 | 0.9932 | 0.6 |
| 0.0724 | 20.0 | 4300 | 0.9792 | 0.5556 |
| 0.0616 | 21.0 | 4515 | 0.9528 | 0.5556 |
| 0.0591 | 22.0 | 4730 | 0.9741 | 0.5556 |
| 0.0433 | 23.0 | 4945 | 0.9824 | 0.5556 |
| 0.0476 | 24.0 | 5160 | 0.9907 | 0.5556 |
| 0.0326 | 25.0 | 5375 | 0.9714 | 0.5778 |
| 0.0325 | 26.0 | 5590 | 0.9834 | 0.6 |
| 0.0352 | 27.0 | 5805 | 0.9903 | 0.5778 |
| 0.0319 | 28.0 | 6020 | 0.9831 | 0.5778 |
| 0.0242 | 29.0 | 6235 | 0.9872 | 0.6 |
| 0.0238 | 30.0 | 6450 | 1.0027 | 0.6222 |
| 0.0166 | 31.0 | 6665 | 0.9985 | 0.5778 |
| 0.0151 | 32.0 | 6880 | 1.0088 | 0.6 |
| 0.0176 | 33.0 | 7095 | 1.0180 | 0.6 |
| 0.0221 | 34.0 | 7310 | 1.0038 | 0.6444 |
| 0.0159 | 35.0 | 7525 | 0.9868 | 0.6667 |
| 0.0115 | 36.0 | 7740 | 1.0104 | 0.6444 |
| 0.017 | 37.0 | 7955 | 1.0128 | 0.6889 |
| 0.0105 | 38.0 | 8170 | 1.0250 | 0.6444 |
| 0.0144 | 39.0 | 8385 | 1.0115 | 0.6889 |
| 0.0092 | 40.0 | 8600 | 1.0202 | 0.6667 |
| 0.0131 | 41.0 | 8815 | 1.0296 | 0.6444 |
| 0.0108 | 42.0 | 9030 | 1.0274 | 0.6889 |
| 0.0089 | 43.0 | 9245 | 1.0423 | 0.6889 |
| 0.0153 | 44.0 | 9460 | 1.0420 | 0.6889 |
| 0.0077 | 45.0 | 9675 | 1.0387 | 0.6667 |
| 0.0096 | 46.0 | 9890 | 1.0413 | 0.6889 |
| 0.0073 | 47.0 | 10105 | 1.0431 | 0.6889 |
| 0.0112 | 48.0 | 10320 | 1.0453 | 0.6889 |
| 0.0085 | 49.0 | 10535 | 1.0438 | 0.6889 |
| 0.01 | 50.0 | 10750 | 1.0440 | 0.6889 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
nurcan/turkishReviews-ds-mini
|
nurcan
| 2023-12-24T12:50:08Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-23T18:53:50Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: turkishReviews-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkishReviews-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.2671
- Validation Loss: 8.7544
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.0918 | 9.2015 | 0 |
| 8.6097 | 8.9164 | 1 |
| 8.2671 | 8.7544 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError0.0_Seed104
|
behzadnet
| 2023-12-24T12:47:48Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-24T12:47:44Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
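In the absence of an official snippet, a hypothetical loading sketch (base model taken from the card metadata, adapter from this repository; quantization settings are omitted for brevity):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: attach this repo's LoRA adapter to its base model
base = AutoModelForCausalLM.from_pretrained("Trelis/Llama-2-7b-chat-hf-sharded-bf16")
model = PeftModel.from_pretrained(base, "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError0.0_Seed104")
tokenizer = AutoTokenizer.from_pretrained("Trelis/Llama-2-7b-chat-hf-sharded-bf16")
```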
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
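For reference, the same settings expressed as a `transformers.BitsAndBytesConfig` (a sketch, not the original training code):
```python
import torch
from transformers import BitsAndBytesConfig

# The quantization settings listed above, as a BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```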
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_RandomError0.0_Seed104
|
behzadnet
| 2023-12-24T12:47:38Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-24T12:47:33Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
Yura32000/my_awesome_food_model
|
Yura32000
| 2023-12-24T12:45:06Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T12:36:41Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6394
- Accuracy: 0.896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7761 | 0.99 | 62 | 2.5927 | 0.824 |
| 1.8745 | 2.0 | 125 | 1.8134 | 0.868 |
| 1.5945 | 2.98 | 186 | 1.6394 | 0.896 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Q-bert/Merged-AGI-7B
|
Q-bert
| 2023-12-24T12:41:18Z | 56 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Math",
"merge",
"en",
"dataset:meta-math/MetaMathQA",
"base_model:Q-bert/MetaMath-Cybertron-Starling",
"base_model:merge:Q-bert/MetaMath-Cybertron-Starling",
"base_model:fblgit/juanako-7b-UNA",
"base_model:merge:fblgit/juanako-7b-UNA",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-10T09:20:47Z |
---
license: cc-by-nc-4.0
datasets:
- meta-math/MetaMathQA
language:
- en
pipeline_tag: text-generation
tags:
- Math
- merge
base_model:
- Q-bert/MetaMath-Cybertron-Starling
- fblgit/juanako-7b-UNA
---
## Merged-AGI-7B
A merge of [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) and [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA) using SLERP.
You can use ChatML format.
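A minimal ChatML generation sketch (the prompt formatting is written out by hand; device settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Q-bert/Merged-AGI-7B")
model = AutoModelForCausalLM.from_pretrained("Q-bert/Merged-AGI-7B", device_map="auto")

# Hand-written ChatML prompt, per the note above that ChatML format is supported
prompt = "<|im_start|>user\nWhat is 12 squared?<|im_end|>\n<|im_start|>assistant\n"
out = model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```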
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results: coming soon.
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | Coming soon |
| ARC (25-shot) | Coming soon |
| HellaSwag (10-shot) | Coming soon |
| MMLU (5-shot) | Coming soon |
| TruthfulQA (0-shot) | Coming soon |
| Winogrande (5-shot) | Coming soon |
| GSM8K (5-shot) | Coming soon |
|
Chhaya/results
|
Chhaya
| 2023-12-24T12:34:29Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-12-24T12:32:50Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Llama-2-7b-chat-hf
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
FirstLast/RealisticVision-LoRA-lidrs-4.3
|
FirstLast
| 2023-12-24T12:31:44Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:SG161222/Realistic_Vision_V5.1_noVAE",
"base_model:adapter:SG161222/Realistic_Vision_V5.1_noVAE",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-12-24T11:55:56Z |
---
license: creativeml-openrail-m
base_model: SG161222/Realistic_Vision_V5.1_noVAE
instance_prompt: a lidrs dress
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - FirstLast/RealisticVision-LoRA-lidrs-4.3
These are LoRA adaptation weights for SG161222/Realistic_Vision_V5.1_noVAE. The weights were trained on a lidrs dress using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
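A loading sketch with `diffusers` (the LoRA weight filename is assumed to follow the default DreamBooth naming; since the base checkpoint ships without a baked-in VAE, you may want to attach one separately):
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: base model plus this repo's LoRA weights (default weight name assumed)
pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("FirstLast/RealisticVision-LoRA-lidrs-4.3")
image = pipe("a photo of a lidrs dress", num_inference_steps=30).images[0]
image.save("lidrs.png")
```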
|
HaohuaLv/retina-backbone_resnet50-ft_widerface
|
HaohuaLv
| 2023-12-24T12:23:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2023-12-24T12:13:04Z |
# [Retinaface Face Detection](https://github.com/HaohuaLv/retinaface-face-detection)
A RetinaFace model for face detection trained on the WIDER FACE dataset.
Note: this repository contains training, evaluation, and inference scripts for a face detection model, written from scratch in Hugging Face 🤗 style for practice.
## Train
Run
```bash
python train.py --model_config_file <MODEL_CONFIG_FILE>
```
`<MODEL_CONFIG_FILE>` can be found in the `config` folder.
Model checkpoints will be saved in the `checkpoints` folder by default.
The ResNet50-backbone checkpoint can be downloaded from my [Google Drive](https://drive.google.com/drive/folders/1teN75lXOvYPLdpzLoXPEPrsXfZJU18Id?usp=sharing) or from [HuggingFace🤗](https://huggingface.co/HaohuaLv/retina-backbone_resnet50-ft_widerface).
## Inference
### Observe logits map and predicted bboxes
Run
```bash
python inference.py --checkpoint_path <CHECKPOINT_PATH>
```
`<CHECKPOINT_PATH>` is a model folder containing `config.json` and `pytorch_model.bin`.

### Detect
Run
```bash
python detect.py --checkpoint_path <CHECKPOINT_PATH> --image_path <IMAGE_PATH> --save_path <SAVE_PATH>
```

## References
- [Retinaface-pytorch](https://github.com/biubug6/Pytorch_Retinaface)
|
micdestefano/a2c-PandaReachDense-v3
|
micdestefano
| 2023-12-24T12:12:03Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T12:09:37Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `algo-env.zip` naming on the Hub):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the standard "<algo>-<env>.zip" pattern
checkpoint = load_from_hub("micdestefano/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
jgodding/ppo-LunarLander-v2
|
jgodding
| 2023-12-24T12:10:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T12:10:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.91 +/- 17.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `algo-env.zip` naming on the Hub):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the standard "<algo>-<env>.zip" pattern
checkpoint = load_from_hub("jgodding/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MadFritz/ppo-LunarLander
|
MadFritz
| 2023-12-24T12:03:53Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T12:03:48Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 169.30 +/- 92.74
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo-LunarLander',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'MadFritz/ppo-LunarLander',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24
|
GordonMcGregor
| 2023-12-24T11:59:14Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-24T07:13:19Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a photo of TOK man in a fedora'
output:
url:
"image_0.png"
- text: 'a photo of TOK man in a fedora'
output:
url:
"image_1.png"
- text: 'a photo of TOK man in a fedora'
output:
url:
"image_2.png"
- text: 'a photo of TOK man in a fedora'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK man
license: openrail++
---
# SDXL LoRA DreamBooth - GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24
<Gallery />
## Model description
These are GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK man` to trigger the image generation.
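A generation sketch with `diffusers`, using the base model and training VAE named above (the LoRA weight filename is assumed to follow the default SDXL LoRA naming):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Sketch: SDXL base + the fp16-fix VAE used for training, with this repo's LoRA attached
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24")
image = pipe("a photo of TOK man in a fedora", num_inference_steps=30).images[0]
image.save("tok.png")
```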
## Download model
Weights for this model are available in Safetensors format.
[Download](GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24/tree/main) them in the Files & versions tab.
|
lemoneresearch/tsdae-lemone-mbert-tax
|
lemoneresearch
| 2023-12-24T11:49:09Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"legal",
"french-law",
"droit français",
"tax",
"droit fiscal",
"fiscalité",
"fr",
"dataset:louisbrulenaudet/lpf",
"dataset:louisbrulenaudet/cgi",
"dataset:louisbrulenaudet/code-douanes",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-24T11:31:40Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- legal
- french-law
- droit français
- tax
- droit fiscal
- fiscalité
license: apache-2.0
pretty_name: Domain-adapted mBERT for French Tax Practice
datasets:
- louisbrulenaudet/lpf
- louisbrulenaudet/cgi
- louisbrulenaudet/code-douanes
language:
- fr
library_name: sentence-transformers
---
# Domain-adapted mBERT for French Tax Practice
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This is a transformers model pretrained on the 102 languages with the largest Wikipedias using a masked language modeling (MLM) objective, then fitted with a Transformer-based Sequential Denoising Auto-Encoder (TSDAE) for unsupervised sentence-embedding learning, with one objective: French tax domain adaptation.
This way, the model learns an inner representation of French legal language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer("louisbrulenaudet/tsdae-lemone-mbert-tax")
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    # The first token of the last hidden state is the [CLS] embedding
    return model_output[0][:, 0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("louisbrulenaudet/tsdae-lemone-mbert-tax")
model = AutoModel.from_pretrained("louisbrulenaudet/tsdae-lemone-mbert-tax")
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input["attention_mask"])
print("Sentence embeddings:")
print(sentence_embeddings)
```
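For semantic search, the embeddings are typically compared with cosine similarity. A minimal sketch using the sentence-transformers utilities; the French tax sentences are illustrative only:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("louisbrulenaudet/tsdae-lemone-mbert-tax")

query = model.encode("Champ d'application de la taxe sur la valeur ajoutée", convert_to_tensor=True)
corpus = model.encode(
    [
        "Les opérations imposables à la TVA en raison de leur nature",
        "Dispositions générales du code des douanes",
    ],
    convert_to_tensor=True,
)

scores = util.cos_sim(query, corpus)  # shape (1, 2); higher means more similar
print(scores)
```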
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5507 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "constantlr",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
If you use this code in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2023,
author = {Louis Brulé Naudet},
title = {Domain-adapted mBERT for French Tax Practice},
year = {2023},
howpublished = {\url{https://huggingface.co/louisbrulenaudet/tsdae-lemone-mbert-tax}},
}
```
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).
|
csujeong/Falcon-7b-Finetuned-Financial-Stock
|
csujeong
| 2023-12-24T11:48:40Z | 7 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-12-24T11:40:14Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: tiiuae/falcon-7b
model-index:
- name: Falcon-7b-Finetuned-Financial-Stock
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Falcon-7b-Finetuned-Financial-Stock
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
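A minimal sketch of running the adapter for inference with PEFT; the dtype, device placement, and example prompt are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "csujeong/Falcon-7b-Finetuned-Financial-Stock")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

inputs = tokenizer("What factors drive a stock's P/E ratio?", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```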
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 60
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
chanhua/autotrain-6uoy3-zwdlp
|
chanhua
| 2023-12-24T11:47:13Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:chanhua/autotrain-data-autotrain-6uoy3-zwdlp",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T11:47:00Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- chanhua/autotrain-data-autotrain-6uoy3-zwdlp
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.031825795644891124
f1_micro: 0.10555555555555556
f1_weighted: 0.020156337241764376
precision_macro: 0.017592592592592594
precision_micro: 0.10555555555555556
precision_weighted: 0.011141975308641975
recall_macro: 0.16666666666666666
recall_micro: 0.10555555555555556
recall_weighted: 0.10555555555555556
accuracy: 0.10555555555555556
|
YagiASAFAS/distilbert-base-uncased-finetuned-emotion
|
YagiASAFAS
| 2023-12-24T11:45:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T11:02:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1710
- eval_accuracy: 0.9295
- eval_f1: 0.9302
- eval_runtime: 11.2289
- eval_samples_per_second: 178.112
- eval_steps_per_second: 2.85
- epoch: 1.0
- step: 250
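A minimal inference sketch with the transformers pipeline API; the example sentence is illustrative, and the label names come from the emotion dataset config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YagiASAFAS/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results of this experiment!"))
# e.g. [{'label': 'joy', 'score': ...}]
```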
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231211
- Datasets 2.15.0
- Tokenizers 0.11.0
|
Realgon/N_roberta_imdb_padding20model
|
Realgon
| 2023-12-24T11:42:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T09:22:17Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: N_roberta_imdb_padding20model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.95256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# N_roberta_imdb_padding20model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5709
- Accuracy: 0.9526
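A minimal inference sketch; the review text is illustrative, and the positive/negative label mapping should be checked against `model.config.id2label`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Realgon/N_roberta_imdb_padding20model")
model = AutoModelForSequenceClassification.from_pretrained("Realgon/N_roberta_imdb_padding20model")

inputs = tokenizer("A beautifully shot film with a script to match.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # check model.config.id2label for which index is the positive class
```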
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2052 | 1.0 | 1563 | 0.1966 | 0.9395 |
| 0.1578 | 2.0 | 3126 | 0.1547 | 0.9501 |
| 0.1132 | 3.0 | 4689 | 0.2315 | 0.9490 |
| 0.0801 | 4.0 | 6252 | 0.2392 | 0.9478 |
| 0.0455 | 5.0 | 7815 | 0.3256 | 0.9475 |
| 0.0377 | 6.0 | 9378 | 0.3895 | 0.9394 |
| 0.0299 | 7.0 | 10941 | 0.3465 | 0.9486 |
| 0.0199 | 8.0 | 12504 | 0.3895 | 0.9427 |
| 0.0232 | 9.0 | 14067 | 0.3813 | 0.9450 |
| 0.0158 | 10.0 | 15630 | 0.4284 | 0.9476 |
| 0.0122 | 11.0 | 17193 | 0.4631 | 0.9430 |
| 0.0094 | 12.0 | 18756 | 0.4639 | 0.9500 |
| 0.0074 | 13.0 | 20319 | 0.4256 | 0.9509 |
| 0.0032 | 14.0 | 21882 | 0.4599 | 0.9520 |
| 0.002 | 15.0 | 23445 | 0.5557 | 0.9490 |
| 0.0025 | 16.0 | 25008 | 0.5381 | 0.9490 |
| 0.0018 | 17.0 | 26571 | 0.5017 | 0.9514 |
| 0.0008 | 18.0 | 28134 | 0.5676 | 0.9506 |
| 0.0 | 19.0 | 29697 | 0.5757 | 0.9519 |
| 0.0018 | 20.0 | 31260 | 0.5709 | 0.9526 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v4
|
ThuyNT03
| 2023-12-24T11:27:43Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-24T09:45:46Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_SPAOL_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_SPAOL_v4
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unknown dataset.
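A minimal text2text generation sketch; the Vietnamese input sentence is illustrative, and any task-specific input formatting this fine-tune expects is not documented here:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v4")
model = AutoModelForSeq2SeqLM.from_pretrained("ThuyNT03/KLTN_COQE_viT5_total_SPAOL_v4")

text = "Điện thoại này pin tốt hơn nhiều so với máy cũ của tôi."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```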
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
clewiston/autotrain-vlxo9-2s7eh
|
clewiston
| 2023-12-24T11:16:20Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"dataset:clewiston/autotrain-data-autotrain-vlxo9-2s7eh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T11:15:32Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- clewiston/autotrain-data-autotrain-vlxo9-2s7eh
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.347914457321167
f1_macro: 0.196969696969697
f1_micro: 0.65
f1_weighted: 0.5121212121212122
precision_macro: 0.1625
precision_micro: 0.65
precision_weighted: 0.42250000000000004
recall_macro: 0.25
recall_micro: 0.65
recall_weighted: 0.65
accuracy: 0.65
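A minimal inference sketch with the transformers pipeline; the image URL reuses one of the widget examples above:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="clewiston/autotrain-vlxo9-2s7eh")
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)  # list of {'label': ..., 'score': ...} dicts
```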
|
Vsukiyaki/Yaki-Dofu-Mix
|
Vsukiyaki
| 2023-12-24T11:07:09Z | 33 | 8 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"ja",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-23T09:26:19Z |
---
license: creativeml-openrail-m
language:
- ja
- en
tags:
- stable-diffusion
- text-to-image
---
# Yaki-Dofu-Mix
<img src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/Yaki-Dofu-Mix.png" style="width: 768px;">
## 概要 / Overview
- **Yaki-Dofu-Mix**は、アニメ風の画風に特化したマージモデルです。 / **Yaki-Dofu-Mix** is a merge model that specializes in an anime-like painting style.
- VAEなしでも鮮やかな色合いで出力されます。 / The output will be vividly tinted without VAE.
<hr>
## ライセンス / License
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base text-bold" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
✅
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
🚫
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
🚫
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
✅
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
🚫
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
🚫
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<hr>
## 推奨設定 / Recommended Settings
<pre style="margin: 1em 0; padding: 1em; border-radius: 5px; white-space: pre-line;">
Steps: 20 ~ 60
Sampler: DPM++ 3M SDE Exponential
CFG scale: 7.5
Denoising strength: 0.55
Hires steps: 20
Hires upscaler: R-ESRGAN 4x+ Anime6B
Clip skip: 2
</pre>
Negative:
<pre style="margin: 1em 0; padding: 1em; border-radius: 5px; white-space: pre-line;">
(easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3,
</pre>
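A minimal diffusers sketch, assuming the repo ships standard `StableDiffusionPipeline` weights; A1111-style emphasis syntax such as `(worst quality:1.2)` is not parsed by vanilla diffusers, so the prompts below are plain text:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Vsukiyaki/Yaki-Dofu-Mix", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "cute girl, pink short hair, casual wavy hair, blunt bangs, upper body",
    negative_prompt="worst quality, low quality, bad anatomy, monochrome",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("yaki_dofu_sample.png")
```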
<hr>
## 例 / Examples
<div class="flex justify-center">
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/sample01.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
(solo:1.2),cute girl,(pink short hair),(casual wavy hair:1.3), blunt bangs,blush,head tilt,upper body,black cap,oversized black t-shirt,simple background,white background,cowboy shot,shadow,choker,
Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3,
Steps: 60,
Sampler: DPM++ 3M SDE Exponential,
CFG scale: 7.5,
Seed: 1452497008,
Size: 768x768,
Denoising strength: 0.55,
Clip skip: 2,
Hires upscale: 2.5,
Hires steps: 20,
Hires upscaler: R-ESRGAN 4x+ Anime6B,
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/sample02.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
night,cute girl against wall in the downtown,solo,from side,pink hair,(casual wavy hair:1.3),blunt bangs,duffel coat,plaid skirt,scarf,blush,(depth of field:1.3),(night view),dynamic angle,outdoor,cowboy shot,
Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3,
Steps: 60,
Sampler: DPM++ 3M SDE Exponential,
CFG scale: 7.5,
Seed: 3362678745,
Size: 760x768,
Denoising strength: 0.55,
Clip skip: 2,
Hires upscale: 2.5,
Hires steps: 20,
Hires upscaler: R-ESRGAN 4x+ Anime6B,
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/sample03.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
((solo:1.2)),cute girl sitting on bench in garden,frilled dirndl,from above,looking up,cobblestone pavement,aqua hair,fine bob cut,(hair over one eye),(dappled sunlight:1.2),blurry,(depth of field:1.1),head tilt,:o,(petals),tree,butterfly
Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3,
Steps: 60,
Sampler: DPM++ 3M SDE Exponential,
CFG scale: 7.5,
Seed: 617162279,
Size: 760x768,
Denoising strength: 0.55,
Clip skip: 2,
Hires upscale: 2.5,
Hires steps: 20,
Hires upscaler: R-ESRGAN 4x+ Anime6B,
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/sample04.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
cute girl standing on a beautiful beach,white t-shirt,(brown hair:1.3,brown eyes),(casual wavy long hair:1.3),splash,looking at viewer,upper body,sunset view,chromatic aberration,(depth of field:1.3),cinematic lighting,serenity,wind
Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3,
Steps: 60,
Sampler: DPM++ 3M SDE Exponential,
CFG scale: 7.5,
Seed: 1118141335,
Size: 768x768,
Denoising strength: 0.55,
Clip skip: 2,
Hires upscale: 2.5,
Hires steps: 20,
Hires upscaler: R-ESRGAN 4x+ Anime6B,
</pre>
</div>
</div>
</div>
</div>
</div>
<hr>
Twitter: [@Vsukiyaki_AIArt](https://twitter.com/Vsukiyaki_AIArt)
<a
href="https://twitter.com/Vsukiyaki_AIArt"
class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md"
style="background-color: #1da1f2">
<svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" />
</svg>
</a>
|
hkivancoral/hushem_40x_deit_tiny_adamax_00001_fold5
|
hkivancoral
| 2023-12-24T11:07:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T10:31:11Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_adamax_00001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8536585365853658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_adamax_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8352
- Accuracy: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3813 | 1.0 | 220 | 0.6093 | 0.7561 |
| 0.131 | 2.0 | 440 | 0.4372 | 0.8293 |
| 0.0714 | 3.0 | 660 | 0.6223 | 0.7805 |
| 0.0083 | 4.0 | 880 | 0.5773 | 0.8537 |
| 0.0038 | 5.0 | 1100 | 0.5967 | 0.8537 |
| 0.0013 | 6.0 | 1320 | 0.7213 | 0.8537 |
| 0.0005 | 7.0 | 1540 | 0.6555 | 0.8537 |
| 0.0003 | 8.0 | 1760 | 0.7129 | 0.8537 |
| 0.0002 | 9.0 | 1980 | 0.6903 | 0.8537 |
| 0.0001 | 10.0 | 2200 | 0.7139 | 0.8537 |
| 0.0001 | 11.0 | 2420 | 0.7461 | 0.8537 |
| 0.0001 | 12.0 | 2640 | 0.7296 | 0.8537 |
| 0.0001 | 13.0 | 2860 | 0.7461 | 0.8537 |
| 0.0001 | 14.0 | 3080 | 0.7537 | 0.8537 |
| 0.0 | 15.0 | 3300 | 0.7347 | 0.8537 |
| 0.0 | 16.0 | 3520 | 0.7586 | 0.8537 |
| 0.0 | 17.0 | 3740 | 0.7585 | 0.8537 |
| 0.0 | 18.0 | 3960 | 0.7603 | 0.8537 |
| 0.0 | 19.0 | 4180 | 0.7375 | 0.8537 |
| 0.0 | 20.0 | 4400 | 0.7584 | 0.8537 |
| 0.0 | 21.0 | 4620 | 0.7582 | 0.8537 |
| 0.0 | 22.0 | 4840 | 0.7660 | 0.8537 |
| 0.0 | 23.0 | 5060 | 0.7826 | 0.8537 |
| 0.0 | 24.0 | 5280 | 0.7552 | 0.8537 |
| 0.0 | 25.0 | 5500 | 0.7401 | 0.8537 |
| 0.0 | 26.0 | 5720 | 0.7783 | 0.8537 |
| 0.0 | 27.0 | 5940 | 0.7654 | 0.8537 |
| 0.0 | 28.0 | 6160 | 0.7518 | 0.8537 |
| 0.0 | 29.0 | 6380 | 0.7644 | 0.8537 |
| 0.0 | 30.0 | 6600 | 0.7962 | 0.8537 |
| 0.0 | 31.0 | 6820 | 0.8050 | 0.8537 |
| 0.0 | 32.0 | 7040 | 0.7846 | 0.8537 |
| 0.0 | 33.0 | 7260 | 0.7663 | 0.8537 |
| 0.0 | 34.0 | 7480 | 0.7669 | 0.8780 |
| 0.0 | 35.0 | 7700 | 0.7816 | 0.8780 |
| 0.0 | 36.0 | 7920 | 0.7902 | 0.8537 |
| 0.0 | 37.0 | 8140 | 0.7775 | 0.8537 |
| 0.0 | 38.0 | 8360 | 0.8004 | 0.8537 |
| 0.0 | 39.0 | 8580 | 0.7724 | 0.8537 |
| 0.0 | 40.0 | 8800 | 0.7795 | 0.8780 |
| 0.0 | 41.0 | 9020 | 0.8084 | 0.8537 |
| 0.0 | 42.0 | 9240 | 0.8224 | 0.8537 |
| 0.0 | 43.0 | 9460 | 0.8366 | 0.8293 |
| 0.0 | 44.0 | 9680 | 0.8236 | 0.8780 |
| 0.0 | 45.0 | 9900 | 0.8365 | 0.8293 |
| 0.0 | 46.0 | 10120 | 0.8207 | 0.8537 |
| 0.0 | 47.0 | 10340 | 0.8439 | 0.8293 |
| 0.0 | 48.0 | 10560 | 0.8465 | 0.8537 |
| 0.0 | 49.0 | 10780 | 0.8311 | 0.8537 |
| 0.0 | 50.0 | 11000 | 0.8352 | 0.8537 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_base_sgd_0001_fold4
|
hkivancoral
| 2023-12-24T11:00:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T10:19:12Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.42857142857142855
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2151
- Accuracy: 0.4286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3918 | 1.0 | 219 | 1.4045 | 0.3095 |
| 1.3704 | 2.0 | 438 | 1.3956 | 0.3095 |
| 1.3491 | 3.0 | 657 | 1.3880 | 0.3333 |
| 1.3369 | 4.0 | 876 | 1.3811 | 0.3333 |
| 1.3406 | 5.0 | 1095 | 1.3747 | 0.3333 |
| 1.3171 | 6.0 | 1314 | 1.3686 | 0.3333 |
| 1.2982 | 7.0 | 1533 | 1.3628 | 0.3571 |
| 1.2896 | 8.0 | 1752 | 1.3571 | 0.3571 |
| 1.2549 | 9.0 | 1971 | 1.3513 | 0.3810 |
| 1.2384 | 10.0 | 2190 | 1.3457 | 0.4048 |
| 1.2507 | 11.0 | 2409 | 1.3401 | 0.4286 |
| 1.2362 | 12.0 | 2628 | 1.3346 | 0.4286 |
| 1.1966 | 13.0 | 2847 | 1.3293 | 0.4286 |
| 1.2279 | 14.0 | 3066 | 1.3240 | 0.4286 |
| 1.2136 | 15.0 | 3285 | 1.3188 | 0.4286 |
| 1.1856 | 16.0 | 3504 | 1.3138 | 0.4286 |
| 1.1941 | 17.0 | 3723 | 1.3088 | 0.4286 |
| 1.1805 | 18.0 | 3942 | 1.3039 | 0.4286 |
| 1.1554 | 19.0 | 4161 | 1.2991 | 0.4048 |
| 1.1709 | 20.0 | 4380 | 1.2943 | 0.4048 |
| 1.1523 | 21.0 | 4599 | 1.2895 | 0.4048 |
| 1.138 | 22.0 | 4818 | 1.2848 | 0.4048 |
| 1.0984 | 23.0 | 5037 | 1.2803 | 0.4048 |
| 1.1405 | 24.0 | 5256 | 1.2759 | 0.4048 |
| 1.1028 | 25.0 | 5475 | 1.2716 | 0.4286 |
| 1.1236 | 26.0 | 5694 | 1.2674 | 0.4286 |
| 1.0819 | 27.0 | 5913 | 1.2634 | 0.4286 |
| 1.1245 | 28.0 | 6132 | 1.2595 | 0.4286 |
| 1.0929 | 29.0 | 6351 | 1.2557 | 0.4286 |
| 1.0861 | 30.0 | 6570 | 1.2521 | 0.4048 |
| 1.082 | 31.0 | 6789 | 1.2486 | 0.4048 |
| 1.0826 | 32.0 | 7008 | 1.2452 | 0.4048 |
| 1.0889 | 33.0 | 7227 | 1.2420 | 0.4048 |
| 1.052 | 34.0 | 7446 | 1.2390 | 0.4286 |
| 1.056 | 35.0 | 7665 | 1.2361 | 0.4286 |
| 1.0391 | 36.0 | 7884 | 1.2333 | 0.4286 |
| 1.0236 | 37.0 | 8103 | 1.2307 | 0.4286 |
| 1.0474 | 38.0 | 8322 | 1.2283 | 0.4286 |
| 1.0069 | 39.0 | 8541 | 1.2261 | 0.4286 |
| 1.0443 | 40.0 | 8760 | 1.2242 | 0.4286 |
| 1.0711 | 41.0 | 8979 | 1.2223 | 0.4048 |
| 1.053 | 42.0 | 9198 | 1.2207 | 0.4286 |
| 1.0356 | 43.0 | 9417 | 1.2193 | 0.4286 |
| 1.0491 | 44.0 | 9636 | 1.2181 | 0.4286 |
| 0.9928 | 45.0 | 9855 | 1.2171 | 0.4286 |
| 1.0402 | 46.0 | 10074 | 1.2163 | 0.4286 |
| 1.0792 | 47.0 | 10293 | 1.2157 | 0.4286 |
| 1.0146 | 48.0 | 10512 | 1.2153 | 0.4286 |
| 1.0325 | 49.0 | 10731 | 1.2152 | 0.4286 |
| 1.0249 | 50.0 | 10950 | 1.2151 | 0.4286 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|