| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1-900k) | metadata (stringlengths, 2-438k) | id (stringlengths, 5-122) | last_modified (null) | tags (sequencelengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25-25) | arxiv (sequencelengths, 0-201) | languages (sequencelengths, 0-1.83k) | tags_str (stringlengths, 17-9.34k) | text_str (stringlengths, 0-389k) | text_lists (sequencelengths, 0-722) | processed_texts (sequencelengths, 1-723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | transformers |
# DavidAU/Kunoichi-DPO-v2-Instruct-32k-7B-Q6_K-GGUF
This model was converted to GGUF format from [`mpasila/Kunoichi-DPO-v2-Instruct-32k-7B`](https://huggingface.co/mpasila/Kunoichi-DPO-v2-Instruct-32k-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mpasila/Kunoichi-DPO-v2-Instruct-32k-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Kunoichi-DPO-v2-Instruct-32k-7B-Q6_K-GGUF --model kunoichi-dpo-v2-instruct-32k-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Kunoichi-DPO-v2-Instruct-32k-7B-Q6_K-GGUF --model kunoichi-dpo-v2-instruct-32k-7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m kunoichi-dpo-v2-instruct-32k-7b.Q6_K.gguf -n 128
```
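If you prefer Python over the CLI, the same quantized file can be loaded with the `llama-cpp-python` bindings. This is a minimal sketch, not part of the card's official instructions: it assumes the package is installed (`pip install llama-cpp-python`) and that the GGUF file has already been downloaded locally; the 32k context value is illustrative and needs correspondingly more RAM.
```python
from llama_cpp import Llama

# Load the locally downloaded Q6_K file; n_ctx=32768 matches this
# merge's advertised 32k window but is only an illustration.
llm = Llama(model_path="kunoichi-dpo-v2-instruct-32k-7b.Q6_K.gguf", n_ctx=32768)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```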
| {"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["alpindale/Mistral-7B-v0.2-hf", "mistralai/Mistral-7B-Instruct-v0.2", "SanjiWatsuki/Kunoichi-DPO-v2-7B"]} | DavidAU/Kunoichi-DPO-v2-Instruct-32k-7B-Q6_K-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:alpindale/Mistral-7B-v0.2-hf",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T03:49:08+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-alpindale/Mistral-7B-v0.2-hf #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #endpoints_compatible #region-us
|
# DavidAU/Kunoichi-DPO-v2-Instruct-32k-7B-Q6_K-GGUF
This model was converted to GGUF format from 'mpasila/Kunoichi-DPO-v2-Instruct-32k-7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Kunoichi-DPO-v2-Instruct-32k-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/Kunoichi-DPO-v2-Instruct-32k-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-alpindale/Mistral-7B-v0.2-hf #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #endpoints_compatible #region-us \n",
"# DavidAU/Kunoichi-DPO-v2-Instruct-32k-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/Kunoichi-DPO-v2-Instruct-32k-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5544
- F1 Score: 0.7019
- Accuracy: 0.707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
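As a rough illustration only (the actual training script is not included in this card), these settings map onto Hugging Face `TrainingArguments` approximately as follows; the output directory name is an assumption, and the listed Adam betas and epsilon are already the defaults.
```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above; treat this
# as a sketch, not the exact configuration that produced the results.
args = TrainingArguments(
    output_dir="GUE_tf_3-seqsight_8192_512_17M-L32_all",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```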
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6055 | 14.29 | 200 | 0.5542 | 0.7044 | 0.708 |
| 0.5706 | 28.57 | 400 | 0.5378 | 0.7185 | 0.72 |
| 0.5569 | 42.86 | 600 | 0.5314 | 0.7174 | 0.718 |
| 0.543 | 57.14 | 800 | 0.5287 | 0.7110 | 0.711 |
| 0.5311 | 71.43 | 1000 | 0.5236 | 0.7293 | 0.732 |
| 0.5191 | 85.71 | 1200 | 0.5231 | 0.7250 | 0.725 |
| 0.508 | 100.0 | 1400 | 0.5307 | 0.7211 | 0.721 |
| 0.4991 | 114.29 | 1600 | 0.5402 | 0.7206 | 0.722 |
| 0.4876 | 128.57 | 1800 | 0.5414 | 0.7125 | 0.713 |
| 0.477 | 142.86 | 2000 | 0.5550 | 0.7171 | 0.717 |
| 0.4685 | 157.14 | 2200 | 0.5542 | 0.7228 | 0.723 |
| 0.4578 | 171.43 | 2400 | 0.5797 | 0.7270 | 0.727 |
| 0.4489 | 185.71 | 2600 | 0.5800 | 0.7060 | 0.706 |
| 0.4391 | 200.0 | 2800 | 0.5882 | 0.7110 | 0.711 |
| 0.4298 | 214.29 | 3000 | 0.5983 | 0.7071 | 0.707 |
| 0.419 | 228.57 | 3200 | 0.6226 | 0.7070 | 0.707 |
| 0.412 | 242.86 | 3400 | 0.6148 | 0.7079 | 0.708 |
| 0.4035 | 257.14 | 3600 | 0.6265 | 0.6961 | 0.696 |
| 0.3953 | 271.43 | 3800 | 0.6404 | 0.7000 | 0.7 |
| 0.3865 | 285.71 | 4000 | 0.6663 | 0.6937 | 0.694 |
| 0.3777 | 300.0 | 4200 | 0.6643 | 0.7041 | 0.704 |
| 0.3715 | 314.29 | 4400 | 0.6825 | 0.6991 | 0.699 |
| 0.3641 | 328.57 | 4600 | 0.6910 | 0.7011 | 0.701 |
| 0.3578 | 342.86 | 4800 | 0.7015 | 0.6968 | 0.697 |
| 0.3501 | 357.14 | 5000 | 0.7089 | 0.6991 | 0.699 |
| 0.3445 | 371.43 | 5200 | 0.7226 | 0.6980 | 0.698 |
| 0.339 | 385.71 | 5400 | 0.7392 | 0.6961 | 0.696 |
| 0.3335 | 400.0 | 5600 | 0.7468 | 0.6899 | 0.69 |
| 0.3293 | 414.29 | 5800 | 0.7530 | 0.6846 | 0.685 |
| 0.3247 | 428.57 | 6000 | 0.7637 | 0.6889 | 0.689 |
| 0.3188 | 442.86 | 6200 | 0.7667 | 0.6991 | 0.699 |
| 0.3144 | 457.14 | 6400 | 0.7788 | 0.6960 | 0.696 |
| 0.3107 | 471.43 | 6600 | 0.7932 | 0.6938 | 0.694 |
| 0.3085 | 485.71 | 6800 | 0.7876 | 0.6900 | 0.69 |
| 0.3035 | 500.0 | 7000 | 0.8052 | 0.6858 | 0.686 |
| 0.3007 | 514.29 | 7200 | 0.8075 | 0.6836 | 0.684 |
| 0.2992 | 528.57 | 7400 | 0.8033 | 0.6868 | 0.687 |
| 0.2948 | 542.86 | 7600 | 0.8097 | 0.6889 | 0.689 |
| 0.2926 | 557.14 | 7800 | 0.8071 | 0.6909 | 0.691 |
| 0.29 | 571.43 | 8000 | 0.8222 | 0.6839 | 0.684 |
| 0.2872 | 585.71 | 8200 | 0.8243 | 0.6941 | 0.694 |
| 0.286 | 600.0 | 8400 | 0.8213 | 0.6881 | 0.688 |
| 0.2847 | 614.29 | 8600 | 0.8289 | 0.6869 | 0.687 |
| 0.2835 | 628.57 | 8800 | 0.8307 | 0.6880 | 0.688 |
| 0.2825 | 642.86 | 9000 | 0.8279 | 0.6859 | 0.686 |
| 0.2815 | 657.14 | 9200 | 0.8337 | 0.6850 | 0.685 |
| 0.2789 | 671.43 | 9400 | 0.8416 | 0.69 | 0.69 |
| 0.2791 | 685.71 | 9600 | 0.8393 | 0.6910 | 0.691 |
| 0.2777 | 700.0 | 9800 | 0.8436 | 0.6879 | 0.688 |
| 0.2778 | 714.29 | 10000 | 0.8435 | 0.6889 | 0.689 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_tf_3-seqsight_8192_512_17M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_8192_512_17M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null | 2024-04-16T03:50:27+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
| GUE\_tf\_3-seqsight\_8192\_512\_17M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5544
* F1 Score: 0.7019
* Accuracy: 0.707
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ClasificadorMotivoMora-Bert
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4814
- Accuracy: 0.8427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.443 | 1.0 | 845 | 0.5023 | 0.8214 |
| 0.3651 | 2.0 | 1690 | 0.4184 | 0.8504 |
| 0.2535 | 3.0 | 2535 | 0.4814 | 0.8427 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
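Since the card does not include a usage snippet, here is a minimal inference sketch with the `transformers` pipeline API. The label set is undocumented ("More information needed" above), so the labels returned depend entirely on the fine-tune; the example sentence is hypothetical.
```python
from transformers import pipeline

# Sketch only: label names come from the (undocumented) training data.
clf = pipeline(
    "text-classification",
    model="Arodrigo/ClasificadorMotivoMora-Bert",
)
# "The client could not pay the installment this month."
print(clf("El cliente no pudo pagar la cuota este mes."))
```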
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "dccuchile/bert-base-spanish-wwm-uncased", "model-index": [{"name": "ClasificadorMotivoMora-Bert", "results": []}]} | Arodrigo/ClasificadorMotivoMora-Bert | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T03:50:37+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-dccuchile/bert-base-spanish-wwm-uncased #autotrain_compatible #endpoints_compatible #region-us
| ClasificadorMotivoMora-Bert
===========================
This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4814
* Accuracy: 0.8427
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-dccuchile/bert-base-spanish-wwm-uncased #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
# rabimba/Gemma-COT-Q4_K_M-GGUF
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo rabimba/Gemma-COT-Q4_K_M-GGUF --model gemma-cot.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo rabimba/Gemma-COT-Q4_K_M-GGUF --model gemma-cot.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-cot.Q4_K_M.gguf -n 128
```
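Once `llama-server` is running as above, any HTTP client can query its built-in completion endpoint. A minimal Python sketch, assuming the server's default port 8080 (override with `--port`) and the third-party `requests` package:
```python
import requests

# POST to the llama.cpp server's /completion endpoint started above.
resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": "The meaning to life and the universe is", "n_predict": 64},
)
print(resp.json()["content"])
```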
| {"tags": ["llama-cpp", "gguf-my-repo"]} | rabimba/Gemma-COT-Q4_K_M-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"region:us"
] | null | 2024-04-16T03:51:13+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #region-us
|
# rabimba/Gemma-COT-Q4_K_M-GGUF
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# rabimba/Gemma-COT-Q4_K_M-GGUF",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n",
"# rabimba/Gemma-COT-Q4_K_M-GGUF",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-wikisql2
This model is a fine-tuned version of [adityarao1612/mt5-small-finetuned-wikisql](https://huggingface.co/adityarao1612/mt5-small-finetuned-wikisql) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4787
- Bleu: 42.5374
- Gen Len: 16.3262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.6048 | 1.0 | 8097 | 0.5237 | 41.501 | 16.365 |
| 0.5856 | 2.0 | 16194 | 0.4880 | 42.1987 | 16.2478 |
| 0.5583 | 3.0 | 24291 | 0.4787 | 42.5374 | 16.3262 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
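The card gives no usage example, so here is a minimal inference sketch. The expected input format is undocumented; the plain English question below is a guess based on typical WikiSQL fine-tunes of mT5, not a confirmed prompt format.
```python
from transformers import pipeline

# Sketch only: the input format the model was trained on is not
# documented in this card.
nl2sql = pipeline(
    "text2text-generation",
    model="Akki-off/mt5-small-finetuned-wikisql2",
)
print(nl2sql("How many heads of the departments are older than 56?"))
```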
| {"tags": ["generated_from_trainer"], "metrics": ["bleu"], "model-index": [{"name": "mt5-small-finetuned-wikisql2", "results": []}]} | Akki-off/mt5-small-finetuned-wikisql2 | null | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T03:51:44+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mt5-small-finetuned-wikisql2
============================
This model is a fine-tuned version of adityarao1612/mt5-small-finetuned-wikisql on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4787
* Bleu: 42.5374
* Gen Len: 16.3262
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.26.0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.26.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.26.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4508
- F1 Score: 0.7865
- Accuracy: 0.787
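This repo ships only a PEFT adapter, so using it requires attaching it to the base model. A minimal loading sketch under stated assumptions: the seqsight base loads via `AutoModel` (`trust_remote_code=True` is a guess), and the task head needed to reproduce the metrics above is not documented here.
```python
from peft import PeftModel
from transformers import AutoModel

# Load the base backbone, then attach this repo's PEFT adapter.
# trust_remote_code is an assumption about the seqsight base model.
base = AutoModel.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_17M", trust_remote_code=True
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_2-seqsight_8192_512_17M-L32_all"
)
```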
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5471 | 20.0 | 200 | 0.5154 | 0.7412 | 0.743 |
| 0.4962 | 40.0 | 400 | 0.5097 | 0.7518 | 0.752 |
| 0.4775 | 60.0 | 600 | 0.5105 | 0.7385 | 0.739 |
| 0.4623 | 80.0 | 800 | 0.5087 | 0.7518 | 0.752 |
| 0.4487 | 100.0 | 1000 | 0.5222 | 0.7450 | 0.745 |
| 0.4353 | 120.0 | 1200 | 0.5298 | 0.7450 | 0.745 |
| 0.4208 | 140.0 | 1400 | 0.5376 | 0.7469 | 0.747 |
| 0.4073 | 160.0 | 1600 | 0.5487 | 0.7410 | 0.741 |
| 0.3906 | 180.0 | 1800 | 0.5553 | 0.7469 | 0.747 |
| 0.3775 | 200.0 | 2000 | 0.5870 | 0.7397 | 0.74 |
| 0.3623 | 220.0 | 2200 | 0.5985 | 0.7429 | 0.743 |
| 0.3484 | 240.0 | 2400 | 0.6139 | 0.7318 | 0.732 |
| 0.3346 | 260.0 | 2600 | 0.6319 | 0.7288 | 0.729 |
| 0.3238 | 280.0 | 2800 | 0.6569 | 0.7267 | 0.727 |
| 0.3117 | 300.0 | 3000 | 0.6674 | 0.7239 | 0.724 |
| 0.3001 | 320.0 | 3200 | 0.6887 | 0.7299 | 0.73 |
| 0.2897 | 340.0 | 3400 | 0.6989 | 0.7259 | 0.726 |
| 0.2787 | 360.0 | 3600 | 0.7202 | 0.7250 | 0.725 |
| 0.2695 | 380.0 | 3800 | 0.7435 | 0.7207 | 0.721 |
| 0.2603 | 400.0 | 4000 | 0.7467 | 0.7270 | 0.727 |
| 0.2515 | 420.0 | 4200 | 0.7857 | 0.7206 | 0.721 |
| 0.2456 | 440.0 | 4400 | 0.7825 | 0.7157 | 0.716 |
| 0.2376 | 460.0 | 4600 | 0.8019 | 0.7207 | 0.721 |
| 0.2289 | 480.0 | 4800 | 0.8152 | 0.7159 | 0.716 |
| 0.2245 | 500.0 | 5000 | 0.8429 | 0.7145 | 0.715 |
| 0.2189 | 520.0 | 5200 | 0.8462 | 0.7148 | 0.715 |
| 0.2119 | 540.0 | 5400 | 0.8669 | 0.7139 | 0.714 |
| 0.206 | 560.0 | 5600 | 0.8800 | 0.7138 | 0.714 |
| 0.2014 | 580.0 | 5800 | 0.8786 | 0.7196 | 0.72 |
| 0.1969 | 600.0 | 6000 | 0.9058 | 0.7249 | 0.725 |
| 0.1913 | 620.0 | 6200 | 0.9001 | 0.7088 | 0.709 |
| 0.1876 | 640.0 | 6400 | 0.9237 | 0.7156 | 0.716 |
| 0.1854 | 660.0 | 6600 | 0.9214 | 0.7199 | 0.72 |
| 0.1821 | 680.0 | 6800 | 0.9227 | 0.7140 | 0.714 |
| 0.1792 | 700.0 | 7000 | 0.9494 | 0.7126 | 0.713 |
| 0.1754 | 720.0 | 7200 | 0.9516 | 0.7116 | 0.712 |
| 0.1727 | 740.0 | 7400 | 0.9490 | 0.7169 | 0.717 |
| 0.1699 | 760.0 | 7600 | 0.9630 | 0.7140 | 0.714 |
| 0.1678 | 780.0 | 7800 | 0.9612 | 0.7209 | 0.721 |
| 0.1638 | 800.0 | 8000 | 0.9844 | 0.7190 | 0.719 |
| 0.1652 | 820.0 | 8200 | 0.9799 | 0.7179 | 0.718 |
| 0.1623 | 840.0 | 8400 | 0.9791 | 0.7130 | 0.713 |
| 0.1596 | 860.0 | 8600 | 0.9917 | 0.7169 | 0.717 |
| 0.1589 | 880.0 | 8800 | 0.9911 | 0.7140 | 0.714 |
| 0.1574 | 900.0 | 9000 | 1.0070 | 0.7160 | 0.716 |
| 0.1573 | 920.0 | 9200 | 0.9988 | 0.7129 | 0.713 |
| 0.1555 | 940.0 | 9400 | 1.0088 | 0.714 | 0.714 |
| 0.1549 | 960.0 | 9600 | 1.0083 | 0.716 | 0.716 |
| 0.1561 | 980.0 | 9800 | 1.0067 | 0.7109 | 0.711 |
| 0.1542 | 1000.0 | 10000 | 1.0103 | 0.7150 | 0.715 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_tf_2-seqsight_8192_512_17M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_8192_512_17M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null | 2024-04-16T03:54:50+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
| GUE\_tf\_2-seqsight\_8192\_512\_17M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4508
* F1 Score: 0.7865
* Accuracy: 0.787
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# DavidAU/SeaMax-7B-Q6_K-GGUF
This model was converted to GGUF format from [`mpasila/SeaMax-7B`](https://huggingface.co/mpasila/SeaMax-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mpasila/SeaMax-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SeaMax-7B-Q6_K-GGUF --model seamax-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SeaMax-7B-Q6_K-GGUF --model seamax-7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m seamax-7b.Q6_K.gguf -n 128
```
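Alternatively, the single GGUF file can be fetched programmatically instead of cloning the repo. A sketch using the `huggingface_hub` package (an assumption; any download method works), after which the returned path can be passed to `llama-cli --model` or `llama-server --model`:
```python
from huggingface_hub import hf_hub_download

# Download just the quantized file; the return value is the local
# cache path to hand to llama-cli / llama-server.
path = hf_hub_download(
    repo_id="DavidAU/SeaMax-7B-Q6_K-GGUF",
    filename="seamax-7b.Q6_K.gguf",
)
print(path)
```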
| {"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["mpasila/PIPPA-Named-7B", "Locutusque/SlimHercules-4.0-Mistral-7B-v0.2", "cognitivecomputations/dolphin-2.8-mistral-7b-v02"]} | DavidAU/SeaMax-7B-Q6_K-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:mpasila/PIPPA-Named-7B",
"base_model:Locutusque/SlimHercules-4.0-Mistral-7B-v0.2",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T03:55:07+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-mpasila/PIPPA-Named-7B #base_model-Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #endpoints_compatible #region-us
|
# DavidAU/SeaMax-7B-Q6_K-GGUF
This model was converted to GGUF format from 'mpasila/SeaMax-7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SeaMax-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/SeaMax-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-mpasila/PIPPA-Named-7B #base_model-Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #endpoints_compatible #region-us \n",
"# DavidAU/SeaMax-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/SeaMax-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which show improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capability and is the first choice among models of its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leading models 10x its size.
For more details on WizardLM-2, please read our [release blog post](https://wizardlm.github.io/WizardLM2) and the upcoming paper.
## Model Details
* **Model name**: WizardLM-2 8x22B
* **Developed by**: WizardLM@Microsoft AI
* **Model type**: Mixture of Experts (MoE)
* **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
* **Parameters**: 141B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache 2.0
## Model Capabilities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by lmsys, to assess the performance of models.
WizardLM-2 8x22B demonstrates highly competitive performance even when compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging evaluation set of real-world instructions covering the main categories of human requirements, such as writing, coding, math, reasoning, agent tasks, and multilingual tasks.
We report the win:loss rate without ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable to Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI-powered synthetic training system** to train the WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details on this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note on system prompt usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
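As a concrete illustration of that format, here is a minimal, hypothetical Python helper that assembles the multi-turn prompt string. It is not the official demo script; the `</s>` terminator placement simply follows the template shown above.
```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply_or_None) pairs.

    The final turn should use None so the model generates the
    next ASSISTANT response.
    """
    prompt = SYSTEM
    for user, assistant in turns:
        prompt += f" USER: {user} ASSISTANT:"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```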
<b>WizardLM-2 Inference Demo Script</b>
We provide WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
| {"license": "apache-2.0"} | KnutJaegersberg/WizardLM-2-8x22B | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T03:56:06+00:00 | [
"2304.12244",
"2306.08568",
"2308.09583"
] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<p style="font-size:20px;" align="center">
<a href="URL target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
<a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="URL target="_blank">Discord</a>
</p>
## News [2024/04/15]
We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning and agent.
New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works
and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
For more details of WizardLM-2 please read our release blog post and upcoming paper.
## Model Details
* Model name: WizardLM-2 8x22B
* Developed by: WizardLM@Microsoft AI
* Model type: Mixture of Experts (MoE)
* Base model: mistral-community/Mixtral-8x22B-v0.1
* Parameters: 141B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (Upcoming)
* License: Apache2.0
## Model Capacities
MT-Bench
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Human Preferences Evaluation
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.
<p align="center" width="100%">
<a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo code on our github.
| [
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "EleutherAI/gpt-j-6B"} | anusmriti298/dolly_6B_298A_model1 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/gpt-j-6B",
"region:us"
] | null | 2024-04-16T03:56:07+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-EleutherAI/gpt-j-6B #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-EleutherAI/gpt-j-6B #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
null | transformers |
# DavidAU/Mistral-7B-Erebus-v3-Instruct-32k-Q6_K-GGUF
This model was converted to GGUF format from [`mpasila/Mistral-7B-Erebus-v3-Instruct-32k`](https://huggingface.co/mpasila/Mistral-7B-Erebus-v3-Instruct-32k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mpasila/Mistral-7B-Erebus-v3-Instruct-32k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-7B-Erebus-v3-Instruct-32k-Q6_K-GGUF --model mistral-7b-erebus-v3-instruct-32k.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-7B-Erebus-v3-Instruct-32k-Q6_K-GGUF --model mistral-7b-erebus-v3-instruct-32k.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-7b-erebus-v3-instruct-32k.Q6_K.gguf -n 128
```
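If you prefer Python over the CLI, the same GGUF file can be loaded with the `llama-cpp-python` bindings. A minimal sketch; the `n_ctx` value is an assumption based on the advertised 32k context window:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the quantized checkpoint; n_ctx=32768 is an assumption based on
# the model's advertised 32k context window.
llm = Llama(model_path="mistral-7b-erebus-v3-instruct-32k.Q6_K.gguf", n_ctx=32768)

out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```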
| {"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["alpindale/Mistral-7B-v0.2-hf", "mistralai/Mistral-7B-Instruct-v0.2", "KoboldAI/Mistral-7B-Erebus-v3"]} | DavidAU/Mistral-7B-Erebus-v3-Instruct-32k-Q6_K-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:alpindale/Mistral-7B-v0.2-hf",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:KoboldAI/Mistral-7B-Erebus-v3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T03:56:47+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-alpindale/Mistral-7B-v0.2-hf #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-KoboldAI/Mistral-7B-Erebus-v3 #endpoints_compatible #region-us
|
# DavidAU/Mistral-7B-Erebus-v3-Instruct-32k-Q6_K-GGUF
This model was converted to GGUF format from 'mpasila/Mistral-7B-Erebus-v3-Instruct-32k' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Mistral-7B-Erebus-v3-Instruct-32k-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/Mistral-7B-Erebus-v3-Instruct-32k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-alpindale/Mistral-7B-v0.2-hf #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-KoboldAI/Mistral-7B-Erebus-v3 #endpoints_compatible #region-us \n",
"# DavidAU/Mistral-7B-Erebus-v3-Instruct-32k-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/Mistral-7B-Erebus-v3-Instruct-32k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/Mistral-7B-Holodeck-1-Instruct-32k-Q6_K-GGUF
This model was converted to GGUF format from [`mpasila/Mistral-7B-Holodeck-1-Instruct-32k`](https://huggingface.co/mpasila/Mistral-7B-Holodeck-1-Instruct-32k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mpasila/Mistral-7B-Holodeck-1-Instruct-32k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-7B-Holodeck-1-Instruct-32k-Q6_K-GGUF --model mistral-7b-holodeck-1-instruct-32k.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-7B-Holodeck-1-Instruct-32k-Q6_K-GGUF --model mistral-7b-holodeck-1-instruct-32k.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-7b-holodeck-1-instruct-32k.Q6_K.gguf -n 128
```
| {"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["alpindale/Mistral-7B-v0.2-hf", "mistralai/Mistral-7B-Instruct-v0.2", "KoboldAI/Mistral-7B-Holodeck-1"]} | DavidAU/Mistral-7B-Holodeck-1-Instruct-32k-Q6_K-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:alpindale/Mistral-7B-v0.2-hf",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:KoboldAI/Mistral-7B-Holodeck-1",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T03:57:50+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-alpindale/Mistral-7B-v0.2-hf #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-KoboldAI/Mistral-7B-Holodeck-1 #endpoints_compatible #region-us
|
# DavidAU/Mistral-7B-Holodeck-1-Instruct-32k-Q6_K-GGUF
This model was converted to GGUF format from 'mpasila/Mistral-7B-Holodeck-1-Instruct-32k' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Mistral-7B-Holodeck-1-Instruct-32k-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/Mistral-7B-Holodeck-1-Instruct-32k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-alpindale/Mistral-7B-v0.2-hf #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-KoboldAI/Mistral-7B-Holodeck-1 #endpoints_compatible #region-us \n",
"# DavidAU/Mistral-7B-Holodeck-1-Instruct-32k-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/Mistral-7B-Holodeck-1-Instruct-32k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# karasu-1.1B-slerpx2
karasu-1.1B-slerpx2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [lightblue/karasu-1.1B](https://huggingface.co/lightblue/karasu-1.1B)
* [aipib/karasu-1.1B-slerp_reverse](https://huggingface.co/aipib/karasu-1.1B-slerp_reverse)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: lightblue/karasu-1.1B
layer_range: [0, 22]
- model: aipib/karasu-1.1B-slerp_reverse
layer_range: [0, 22]
merge_method: slerp
base_model: lightblue/karasu-1.1B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
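Here, `t` controls how far each merged tensor moves from the base model toward the second model (roughly, `t = 0` keeps the first model's weights and `t = 1` takes the second's), with separate schedules for attention and MLP tensors. For intuition only, below is a minimal sketch of slerp between two weight tensors; it is illustrative, not mergekit's actual implementation:

```python
# Illustrative slerp between two weight tensors (not mergekit's code).
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors.
    cos_omega = torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm() + eps)
    omega = torch.arccos(cos_omega.clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    # Interpolate along the great circle joining the two vectors.
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```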
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "aipib/karasu-1.1B-slerpx2"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Sample up to 256 new tokens with a standard text-generation pipeline.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "lightblue/karasu-1.1B", "aipib/karasu-1.1B-slerp_reverse"], "base_model": ["lightblue/karasu-1.1B", "aipib/karasu-1.1B-slerp_reverse"]} | aipib/karasu-1.1B-slerpx2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"lightblue/karasu-1.1B",
"aipib/karasu-1.1B-slerp_reverse",
"base_model:lightblue/karasu-1.1B",
"base_model:aipib/karasu-1.1B-slerp_reverse",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T03:58:06+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #aipib/karasu-1.1B-slerp_reverse #base_model-lightblue/karasu-1.1B #base_model-aipib/karasu-1.1B-slerp_reverse #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# karasu-1.1B-slerpx2
karasu-1.1B-slerpx2 is a merge of the following models using LazyMergekit:
* lightblue/karasu-1.1B
* aipib/karasu-1.1B-slerp_reverse
## Configuration
## Usage
| [
"# karasu-1.1B-slerpx2\n\nkarasu-1.1B-slerpx2 is a merge of the following models using LazyMergekit:\n* lightblue/karasu-1.1B\n* aipib/karasu-1.1B-slerp_reverse",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #aipib/karasu-1.1B-slerp_reverse #base_model-lightblue/karasu-1.1B #base_model-aipib/karasu-1.1B-slerp_reverse #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# karasu-1.1B-slerpx2\n\nkarasu-1.1B-slerpx2 is a merge of the following models using LazyMergekit:\n* lightblue/karasu-1.1B\n* aipib/karasu-1.1B-slerp_reverse",
"## Configuration",
"## Usage"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ClasificadorMotivoMora-Roberta
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6173
- Accuracy: 0.8154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
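Expressed as 🤗 `TrainingArguments`, the list above corresponds roughly to the following sketch (the output directory is a placeholder; the Adam betas and epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ClasificadorMotivoMora-Roberta",  # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```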
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4499 | 1.0 | 845 | 0.4837 | 0.8475 |
| 0.374 | 2.0 | 1690 | 0.5517 | 0.8101 |
| 0.1918 | 3.0 | 2535 | 0.6173 | 0.8154 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bertin-project/bertin-roberta-base-spanish", "model-index": [{"name": "ClasificadorMotivoMora-Roberta", "results": []}]} | Arodrigo/ClasificadorMotivoMora-Roberta | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:bertin-project/bertin-roberta-base-spanish",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T03:58:43+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-bertin-project/bertin-roberta-base-spanish #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| ClasificadorMotivoMora-Roberta
==============================
This model is a fine-tuned version of bertin-project/bertin-roberta-base-spanish on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6173
* Accuracy: 0.8154
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-bertin-project/bertin-roberta-base-spanish #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
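Until this section is filled in, a generic loading sketch can serve as a starting point. It assumes a standard causal-LM checkpoint, which the `trl`/`sft` tags suggest but the card does not confirm:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard causal-LM checkpoint; not confirmed by the card.
model_id = "Kash777/mistral_b_finance_finetuned_test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Summarize the quarterly earnings call in one sentence:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```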
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": ["trl", "sft"]} | Kash777/mistral_b_finance_finetuned_test | null | [
"transformers",
"safetensors",
"trl",
"sft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T03:59:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
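Until this section is filled in, the repo's tags (Llama architecture, text generation) suggest a standard pipeline call should work; the prompt below is purely illustrative:

```python
from transformers import pipeline

# Assumption: standard Llama-architecture text-generation checkpoint.
generator = pipeline("text-generation", model="swj0419/email_STEP0000015")
print(generator("Dear team,", max_new_tokens=64)[0]["generated_text"])
```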
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | swj0419/email_STEP0000015 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:00:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/WesPro/State-of-the-MoE_RP-3x7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
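As a concrete starting point, any file from the table below can be fetched programmatically and run locally; a minimal sketch using the Q4_K_M quant (the prompt is illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from this repo; file name taken from the Q4_K_M row below.
path = hf_hub_download(
    repo_id="mradermacher/State-of-the-MoE_RP-3x7b-GGUF",
    filename="State-of-the-MoE_RP-3x7b.Q4_K_M.gguf",
)
llm = Llama(model_path=path)
print(llm("Hello", max_tokens=128)["choices"][0]["text"])
```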
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/State-of-the-MoE_RP-3x7b-GGUF/resolve/main/State-of-the-MoE_RP-3x7b.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "WesPro/State-of-the-MoE_RP-3x7b", "quantized_by": "mradermacher"} | mradermacher/State-of-the-MoE_RP-3x7b-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:WesPro/State-of-the-MoE_RP-3x7b",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:01:11+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-WesPro/State-of-the-MoE_RP-3x7b #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-WesPro/State-of-the-MoE_RP-3x7b #endpoints_compatible #region-us \n"
] |
null | transformers |
# DavidAU/LemonadeRP-4.5.3-11B-Q6_K-GGUF
This model was converted to GGUF format from [`mpasila/LemonadeRP-4.5.3-11B`](https://huggingface.co/mpasila/LemonadeRP-4.5.3-11B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mpasila/LemonadeRP-4.5.3-11B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/LemonadeRP-4.5.3-11B-Q6_K-GGUF --model lemonaderp-4.5.3-11b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/LemonadeRP-4.5.3-11B-Q6_K-GGUF --model lemonaderp-4.5.3-11b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m lemonaderp-4.5.3-11b.Q6_K.gguf -n 128
```
| {"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["KatyTheCutie/LemonadeRP-4.5.3"]} | DavidAU/LemonadeRP-4.5.3-11B-Q6_K-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:01:27+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-KatyTheCutie/LemonadeRP-4.5.3 #endpoints_compatible #region-us
|
# DavidAU/LemonadeRP-4.5.3-11B-Q6_K-GGUF
This model was converted to GGUF format from 'mpasila/LemonadeRP-4.5.3-11B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/LemonadeRP-4.5.3-11B-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/LemonadeRP-4.5.3-11B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-KatyTheCutie/LemonadeRP-4.5.3 #endpoints_compatible #region-us \n",
"# DavidAU/LemonadeRP-4.5.3-11B-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/LemonadeRP-4.5.3-11B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.005-len_4-filtered-v2
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
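Since this repo holds a PEFT adapter rather than full model weights, inference loads the `ai-forever/ruBert-base` backbone first and then applies the adapter. A sketch, under the unverified assumption (suggested by the `sberquad` name) that the task is extractive question answering:

```python
from peft import PeftModel
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Assumption: QA head inferred from the "sberquad" name; not stated in the card.
base = AutoModelForQuestionAnswering.from_pretrained("ai-forever/ruBert-base")
model = PeftModel.from_pretrained(base, "Shalazary/ruBert-base-sberquad-0.005-len_4-filtered-v2")
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruBert-base")
```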
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.005-len_4-filtered-v2", "results": []}]} | Shalazary/ruBert-base-sberquad-0.005-len_4-filtered-v2 | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T04:01:53+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.005-len_4-filtered-v2
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# ruBert-base-sberquad-0.005-len_4-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.005-len_4-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
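Until this section is filled in, a direct `generate` call is a reasonable starting point; the tags identify a Llama-architecture causal LM, and the prompt below is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard Llama-architecture causal LM, per the repo tags.
model_id = "swj0419/email_STEP0000012"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Subject: Meeting follow-up\n\nHi", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```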
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | swj0419/email_STEP0000012 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:03:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# NeuralPipe-7B-slerpv2
NeuralPipe-7B-slerpv2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0.5, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [0.5, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
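To make the `slerp` merge method above concrete: mergekit spherically interpolates each pair of weight tensors, with the interpolation factor `t` varying per layer group according to the filter schedules in the config. The sketch below illustrates that idea only — it is not mergekit's actual implementation, and the tensors are random stand-ins.

```python
# Illustration of spherical linear interpolation (slerp) between two weight
# tensors — NOT mergekit's code. t=0.5 matches the config's default value.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

w_a = torch.randn(16, 16)  # stand-ins for one layer's weights from each model
w_b = torch.randn(16, 16)
merged = slerp(0.5, w_a, w_b)
```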
## 💻 Usage
```python
# Notebook magic below — in a plain shell, run: pip install -qU transformers accelerate
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "zmzmxz/NeuralPipe-7B-slerpv2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "mlabonne/NeuralHermes-2.5-Mistral-7B"], "base_model": ["mistralai/Mistral-7B-Instruct-v0.2", "mlabonne/NeuralHermes-2.5-Mistral-7B"]} | zmzmxz/NeuralPipe-7B-slerpv2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:04:15+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #mistralai/Mistral-7B-Instruct-v0.2 #mlabonne/NeuralHermes-2.5-Mistral-7B #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-mlabonne/NeuralHermes-2.5-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# NeuralPipe-7B-slerpv2
NeuralPipe-7B-slerpv2 is a merge of the following models using LazyMergekit:
* mistralai/Mistral-7B-Instruct-v0.2
* mlabonne/NeuralHermes-2.5-Mistral-7B
## Configuration
## Usage
| [
"# NeuralPipe-7B-slerpv2\n\nNeuralPipe-7B-slerpv2 is a merge of the following models using LazyMergekit:\n* mistralai/Mistral-7B-Instruct-v0.2\n* mlabonne/NeuralHermes-2.5-Mistral-7B",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #mistralai/Mistral-7B-Instruct-v0.2 #mlabonne/NeuralHermes-2.5-Mistral-7B #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-mlabonne/NeuralHermes-2.5-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# NeuralPipe-7B-slerpv2\n\nNeuralPipe-7B-slerpv2 is a merge of the following models using LazyMergekit:\n* mistralai/Mistral-7B-Instruct-v0.2\n* mlabonne/NeuralHermes-2.5-Mistral-7B",
"## Configuration",
"## Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9742
- F1 Score: 0.6215
- Accuracy: 0.6215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
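For reference, here is a minimal sketch of how the hyperparameters above would map onto the standard `transformers` Trainer API. The actual training script is not published, so treat this as an assumption-laden reconstruction rather than the real recipe:

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_prom_prom_300_tata-seqsight_8192_512_30M-L32_all",
    learning_rate=5e-4,                # learning_rate: 0.0005
    per_device_train_batch_size=1536,  # train_batch_size: 1536
    per_device_eval_batch_size=1536,   # eval_batch_size: 1536
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                  # training_steps: 10000
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```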
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5806 | 50.0 | 200 | 0.8296 | 0.6169 | 0.6166 |
| 0.292 | 100.0 | 400 | 1.1874 | 0.6278 | 0.6330 |
| 0.1745 | 150.0 | 600 | 1.4082 | 0.6251 | 0.6248 |
| 0.119 | 200.0 | 800 | 1.5722 | 0.6233 | 0.6232 |
| 0.096 | 250.0 | 1000 | 1.5964 | 0.6266 | 0.6264 |
| 0.0739 | 300.0 | 1200 | 1.8597 | 0.6332 | 0.6330 |
| 0.0647 | 350.0 | 1400 | 1.8943 | 0.6217 | 0.6215 |
| 0.0522 | 400.0 | 1600 | 1.9803 | 0.6234 | 0.6232 |
| 0.0454 | 450.0 | 1800 | 2.0089 | 0.6299 | 0.6297 |
| 0.0406 | 500.0 | 2000 | 2.2778 | 0.6263 | 0.6264 |
| 0.0383 | 550.0 | 2200 | 2.3043 | 0.6117 | 0.6117 |
| 0.0321 | 600.0 | 2400 | 2.3340 | 0.6207 | 0.6215 |
| 0.0293 | 650.0 | 2600 | 2.5164 | 0.6149 | 0.6150 |
| 0.0282 | 700.0 | 2800 | 2.4871 | 0.6098 | 0.6101 |
| 0.0271 | 750.0 | 3000 | 2.4323 | 0.6168 | 0.6166 |
| 0.0231 | 800.0 | 3200 | 2.7414 | 0.6251 | 0.6248 |
| 0.0226 | 850.0 | 3400 | 2.6069 | 0.6184 | 0.6183 |
| 0.0218 | 900.0 | 3600 | 2.6443 | 0.6150 | 0.6150 |
| 0.0215 | 950.0 | 3800 | 2.4484 | 0.6152 | 0.6150 |
| 0.0207 | 1000.0 | 4000 | 2.5040 | 0.6071 | 0.6069 |
| 0.0185 | 1050.0 | 4200 | 2.7363 | 0.6164 | 0.6166 |
| 0.019 | 1100.0 | 4400 | 2.6854 | 0.6053 | 0.6052 |
| 0.0189 | 1150.0 | 4600 | 2.9222 | 0.6229 | 0.6232 |
| 0.0191 | 1200.0 | 4800 | 2.6541 | 0.6179 | 0.6183 |
| 0.017 | 1250.0 | 5000 | 2.7796 | 0.6156 | 0.6166 |
| 0.0165 | 1300.0 | 5200 | 2.8497 | 0.6177 | 0.6183 |
| 0.0173 | 1350.0 | 5400 | 3.0037 | 0.6266 | 0.6264 |
| 0.0159 | 1400.0 | 5600 | 2.5643 | 0.6185 | 0.6183 |
| 0.0153 | 1450.0 | 5800 | 2.6406 | 0.6025 | 0.6036 |
| 0.0149 | 1500.0 | 6000 | 2.6752 | 0.6082 | 0.6085 |
| 0.0141 | 1550.0 | 6200 | 2.7922 | 0.6103 | 0.6101 |
| 0.0146 | 1600.0 | 6400 | 3.0695 | 0.6185 | 0.6183 |
| 0.0131 | 1650.0 | 6600 | 2.9847 | 0.6158 | 0.6166 |
| 0.0134 | 1700.0 | 6800 | 2.9478 | 0.6080 | 0.6085 |
| 0.013 | 1750.0 | 7000 | 2.8049 | 0.6202 | 0.6199 |
| 0.0126 | 1800.0 | 7200 | 2.9029 | 0.6134 | 0.6134 |
| 0.0117 | 1850.0 | 7400 | 2.8941 | 0.6168 | 0.6166 |
| 0.0121 | 1900.0 | 7600 | 2.8812 | 0.6103 | 0.6101 |
| 0.0111 | 1950.0 | 7800 | 2.7734 | 0.6234 | 0.6232 |
| 0.0122 | 2000.0 | 8000 | 2.7834 | 0.6199 | 0.6199 |
| 0.0124 | 2050.0 | 8200 | 2.7370 | 0.6133 | 0.6134 |
| 0.0112 | 2100.0 | 8400 | 3.0169 | 0.6120 | 0.6117 |
| 0.0105 | 2150.0 | 8600 | 2.9094 | 0.6118 | 0.6117 |
| 0.0103 | 2200.0 | 8800 | 3.2569 | 0.6094 | 0.6101 |
| 0.0105 | 2250.0 | 9000 | 3.0014 | 0.6033 | 0.6036 |
| 0.0102 | 2300.0 | 9200 | 3.0162 | 0.6136 | 0.6134 |
| 0.0099 | 2350.0 | 9400 | 3.0850 | 0.6001 | 0.6003 |
| 0.0106 | 2400.0 | 9600 | 3.0642 | 0.6033 | 0.6036 |
| 0.0097 | 2450.0 | 9800 | 3.0301 | 0.6053 | 0.6052 |
| 0.0097 | 2500.0 | 10000 | 3.0508 | 0.6119 | 0.6117 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T04:05:03+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_8192\_512\_30M-L32\_all
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9742
* F1 Score: 0.6215
* Accuracy: 0.6215
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5592
- F1 Score: 0.8590
- Accuracy: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5391 | 9.52 | 200 | 0.4358 | 0.7941 | 0.7944 |
| 0.4157 | 19.05 | 400 | 0.4056 | 0.8179 | 0.8182 |
| 0.3638 | 28.57 | 600 | 0.3718 | 0.8398 | 0.8398 |
| 0.3082 | 38.1 | 800 | 0.3456 | 0.8545 | 0.8545 |
| 0.2715 | 47.62 | 1000 | 0.3558 | 0.8537 | 0.8538 |
| 0.2432 | 57.14 | 1200 | 0.3610 | 0.8575 | 0.8575 |
| 0.2201 | 66.67 | 1400 | 0.3544 | 0.8643 | 0.8643 |
| 0.2034 | 76.19 | 1600 | 0.3628 | 0.8641 | 0.8641 |
| 0.1863 | 85.71 | 1800 | 0.3711 | 0.8685 | 0.8685 |
| 0.174 | 95.24 | 2000 | 0.3924 | 0.8686 | 0.8687 |
| 0.1644 | 104.76 | 2200 | 0.3819 | 0.8702 | 0.8702 |
| 0.1552 | 114.29 | 2400 | 0.4070 | 0.8704 | 0.8704 |
| 0.1483 | 123.81 | 2600 | 0.4077 | 0.8667 | 0.8668 |
| 0.1414 | 133.33 | 2800 | 0.4371 | 0.8634 | 0.8636 |
| 0.1357 | 142.86 | 3000 | 0.4499 | 0.8634 | 0.8636 |
| 0.1298 | 152.38 | 3200 | 0.4516 | 0.8617 | 0.8619 |
| 0.1258 | 161.9 | 3400 | 0.4447 | 0.8673 | 0.8673 |
| 0.1213 | 171.43 | 3600 | 0.4760 | 0.8664 | 0.8666 |
| 0.1182 | 180.95 | 3800 | 0.4613 | 0.8663 | 0.8664 |
| 0.1137 | 190.48 | 4000 | 0.4876 | 0.8602 | 0.8604 |
| 0.1105 | 200.0 | 4200 | 0.4477 | 0.8709 | 0.8709 |
| 0.108 | 209.52 | 4400 | 0.4785 | 0.8619 | 0.8621 |
| 0.1047 | 219.05 | 4600 | 0.4708 | 0.8688 | 0.8689 |
| 0.1023 | 228.57 | 4800 | 0.4867 | 0.8643 | 0.8645 |
| 0.101 | 238.1 | 5000 | 0.5109 | 0.8603 | 0.8606 |
| 0.099 | 247.62 | 5200 | 0.4963 | 0.8659 | 0.8660 |
| 0.0961 | 257.14 | 5400 | 0.4664 | 0.8730 | 0.8730 |
| 0.0938 | 266.67 | 5600 | 0.4873 | 0.8718 | 0.8719 |
| 0.0918 | 276.19 | 5800 | 0.5228 | 0.8663 | 0.8664 |
| 0.09 | 285.71 | 6000 | 0.5103 | 0.8676 | 0.8677 |
| 0.0898 | 295.24 | 6200 | 0.5046 | 0.8699 | 0.8700 |
| 0.0878 | 304.76 | 6400 | 0.4988 | 0.8717 | 0.8717 |
| 0.0858 | 314.29 | 6600 | 0.5031 | 0.8667 | 0.8668 |
| 0.0849 | 323.81 | 6800 | 0.4912 | 0.8732 | 0.8732 |
| 0.0842 | 333.33 | 7000 | 0.5120 | 0.8716 | 0.8717 |
| 0.0824 | 342.86 | 7200 | 0.5455 | 0.8597 | 0.8600 |
| 0.0821 | 352.38 | 7400 | 0.5090 | 0.8709 | 0.8709 |
| 0.0811 | 361.9 | 7600 | 0.5144 | 0.8703 | 0.8704 |
| 0.0796 | 371.43 | 7800 | 0.5141 | 0.8734 | 0.8734 |
| 0.0798 | 380.95 | 8000 | 0.5040 | 0.8692 | 0.8692 |
| 0.078 | 390.48 | 8200 | 0.5198 | 0.8741 | 0.8741 |
| 0.077 | 400.0 | 8400 | 0.5083 | 0.8730 | 0.8730 |
| 0.0767 | 409.52 | 8600 | 0.5393 | 0.8694 | 0.8694 |
| 0.0763 | 419.05 | 8800 | 0.5324 | 0.8680 | 0.8681 |
| 0.0749 | 428.57 | 9000 | 0.5362 | 0.8694 | 0.8694 |
| 0.0749 | 438.1 | 9200 | 0.5346 | 0.8703 | 0.8704 |
| 0.0747 | 447.62 | 9400 | 0.5280 | 0.8709 | 0.8709 |
| 0.0743 | 457.14 | 9600 | 0.5311 | 0.8716 | 0.8717 |
| 0.0746 | 466.67 | 9800 | 0.5326 | 0.8695 | 0.8696 |
| 0.0745 | 476.19 | 10000 | 0.5315 | 0.8707 | 0.8707 |
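To use the trained adapter, attach it to its base model. The sketch below is hedged: it assumes a binary sequence-classification head (F1 and accuracy are the reported metrics) and that the base checkpoint loads through the standard Auto classes — `trust_remote_code` may or may not be required for this backbone.

```python
# Hedged inference sketch: base model + this PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_8192_512_30M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # num_labels=2 is an assumption
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
```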
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T04:08:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_8192\_512\_30M-L32\_all
==============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5592
* F1 Score: 0.8590
* Accuracy: 0.8591
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
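In lieu of the missing snippet, a minimal hedged sketch: the repo tags indicate a DeBERTa-v2 checkpoint used for feature extraction, so the code below pulls hidden states and mean-pools them. The pooling strategy and example input are assumptions, not documented behavior.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "AlanYR/FN_sentiment_tuned_kakao_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("example sentence", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)
embedding = hidden.mean(dim=1)                  # simple mean pooling (assumption)
```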
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | AlanYR/FN_sentiment_tuned_kakao_model | null | [
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:10:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #deberta-v2 #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # ResplendentAI/Aura_v2_7B AWQ
- Model creator: [ResplendentAI](https://huggingface.co/ResplendentAI)
- Original model: [Aura_v2_7B](https://huggingface.co/ResplendentAI/Aura_v2_7B)

## Model Summary
The second version of the Aura line is a direct improvement over the original. Expect poetic and eloquent outputs with real emotion behind them.
I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperature. I will say though that the prose of this model is distinct from the GPT 3.5/4 variant, and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
This model responds best to ChatML for multiturn conversations.
This model, like all other Mistral based models, is compatible with a Mistral compatible mmproj file for multimodal vision capabilities in KoboldCPP.
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Aura_v2_7B-AWQ"
system_message = "You are Aura, incarnated as a powerful AI."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
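The sampler settings recommended in the summary can be passed straight through `generate()`. This reuses the `model`, `tokens`, and `streamer` objects from the snippet above; note that `min_p` only exists in recent `transformers` releases, so on older versions drop it or approximate with `top_p`.

```python
# Hedged variant of the generate() call with the card's recommended sampling.
generation_output = model.generate(
    tokens,
    streamer=streamer,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,  # card recommends staying at or below 1.5
    min_p=0.05,       # requires a transformers version with min_p support
)
```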
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
## Prompt template: ChatML
```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mistral", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml"], "base_model": ["ResplendentAI/Paradigm_7B", "jeiku/Theory_of_Mind_Mistral", "ResplendentAI/Paradigm_7B", "jeiku/selfbot_256_mistral", "ResplendentAI/Paradigm_7B", "jeiku/Gnosis_Reformatted_Mistral", "ResplendentAI/Paradigm_7B"], "pipeline_tag": "text-generation", "inference": false, "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n", "quantized_by": "Suparious"} | solidrust/Aura_v2_7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"en",
"base_model:ResplendentAI/Paradigm_7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:13:02+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #en #base_model-ResplendentAI/Paradigm_7B #license-apache-2.0 #text-generation-inference #region-us
| # ResplendentAI/Aura_v2_7B AWQ
- Model creator: ResplendentAI
- Original model: Aura_v2_7B
!image/png
## Model Summary
The second version of the Aura line is a direct improvement over the original. Expect poetic and eloquent outputs with real emotion behind them.
I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperature. I will say though that the prose of this model is distinct from the GPT 3.5/4 variant, and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
This model responds best to ChatML for multiturn conversations.
This model, like all other Mistral based models, is compatible with a Mistral compatible mmproj file for multimodal vision capabilities in KoboldCPP.
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
## Prompt template: ChatML
| [
"# ResplendentAI/Aura_v2_7B AWQ\n\n- Model creator: ResplendentAI\n- Original model: Aura_v2_7B\n\n!image/png",
"## Model Summary\n\nThe second version of the Aura line is a direct improvement over the original. Expect poetic and eloquent outputs with real emotion behind them.\n\nI recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperature. I will say though that the prose of this model is distinct from the GPT 3.5/4 variant, and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.\n\nIf you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.\n\nThis model responds best to ChatML for multiturn conversations.\n\nThis model, like all other Mistral based models, is compatible with a Mistral compatible mmproj file for multimodal vision capabilities in KoboldCPP.",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code",
"## Prompt template: ChatML"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #en #base_model-ResplendentAI/Paradigm_7B #license-apache-2.0 #text-generation-inference #region-us \n",
"# ResplendentAI/Aura_v2_7B AWQ\n\n- Model creator: ResplendentAI\n- Original model: Aura_v2_7B\n\n!image/png",
"## Model Summary\n\nThe second version of the Aura line is a direct improvement over the original. Expect poetic and eloquent outputs with real emotion behind them.\n\nI recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperature. I will say though that the prose of this model is distinct from the GPT 3.5/4 variant, and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.\n\nIf you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.\n\nThis model responds best to ChatML for multiturn conversations.\n\nThis model, like all other Mistral based models, is compatible with a Mistral compatible mmproj file for multimodal vision capabilities in KoboldCPP.",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code",
"## Prompt template: ChatML"
] |
null | null |
# DavidAU/SthenoWriter-L2-13B-Q6_K-GGUF
This model was converted to GGUF format from [`Sao10K/SthenoWriter-L2-13B`](https://huggingface.co/Sao10K/SthenoWriter-L2-13B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/SthenoWriter-L2-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SthenoWriter-L2-13B-Q6_K-GGUF --model sthenowriter-l2-13b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SthenoWriter-L2-13B-Q6_K-GGUF --model sthenowriter-l2-13b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m sthenowriter-l2-13b.Q6_K.gguf -n 128
```
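As an alternative to the CLI, here is a minimal llama-cpp-python sketch — assuming `pip install llama-cpp-python` and that the GGUF file has already been downloaded from this repo:

```python
from llama_cpp import Llama

llm = Llama(model_path="sthenowriter-l2-13b.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```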
| {"language": ["en"], "license": "llama2", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/SthenoWriter-L2-13B-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:llama2",
"region:us"
] | null | 2024-04-16T04:14:07+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #license-llama2 #region-us
|
# DavidAU/SthenoWriter-L2-13B-Q6_K-GGUF
This model was converted to GGUF format from 'Sao10K/SthenoWriter-L2-13B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SthenoWriter-L2-13B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/SthenoWriter-L2-13B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-llama2 #region-us \n",
"# DavidAU/SthenoWriter-L2-13B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/SthenoWriter-L2-13B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-sidnarsipur/controlnet_rough
These are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: Roughness Map

prompt: Roughness Map

prompt: Roughness Map

prompt: Roughness Map

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
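Since the snippet above is still a TODO, here is a hedged reconstruction based on this card's metadata (ControlNet weights on `stabilityai/stable-diffusion-2-1-base`, prompt "Roughness Map"). The conditioning image path is a hypothetical placeholder, and fp16/CUDA are assumptions.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("sidnarsipur/controlnet_rough", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("conditioning.png")  # placeholder conditioning image
image = pipe(prompt="Roughness Map", image=cond).images[0]
image.save("roughness_map.png")
```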
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training"], "base_model": "stabilityai/stable-diffusion-2-1-base", "inference": true} | sidnarsipur/controlnet_rough | null | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-16T04:15:42+00:00 | [] | [] | TAGS
#diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #region-us
|
# controlnet-sidnarsipur/controlnet_rough
These are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: Roughness Map
!images_0)
prompt: Roughness Map
!images_1)
prompt: Roughness Map
!images_2)
prompt: Roughness Map
!images_3)
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# controlnet-sidnarsipur/controlnet_rough\n\nThese are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with new type of conditioning.\nYou can find some example images below.\n\nprompt: Roughness Map\n!images_0)\nprompt: Roughness Map\n!images_1)\nprompt: Roughness Map\n!images_2)\nprompt: Roughness Map\n!images_3)",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #region-us \n",
"# controlnet-sidnarsipur/controlnet_rough\n\nThese are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with new type of conditioning.\nYou can find some example images below.\n\nprompt: Roughness Map\n!images_0)\nprompt: Roughness Map\n!images_1)\nprompt: Roughness Map\n!images_2)\nprompt: Roughness Map\n!images_3)",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
null | null |
# DavidAU/SthenoWriter2.1-L2-13B-Q6_K-GGUF
This model was converted to GGUF format from [`Sao10K/SthenoWriter2.1-L2-13B`](https://huggingface.co/Sao10K/SthenoWriter2.1-L2-13B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/SthenoWriter2.1-L2-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SthenoWriter2.1-L2-13B-Q6_K-GGUF --model sthenowriter2.1-l2-13b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SthenoWriter2.1-L2-13B-Q6_K-GGUF --model sthenowriter2.1-l2-13b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m sthenowriter2.1-l2-13b.Q6_K.gguf -n 128
```
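Once the `llama-server` command above is running, you can also query it over HTTP. A hedged sketch assuming the server's default port (8080); adjust if you passed `--port`:

```python
import requests

resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": "The meaning to life and the universe is", "n_predict": 128},
)
print(resp.json()["content"])
```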
| {"language": ["en"], "license": "llama2", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/SthenoWriter2.1-L2-13B-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:llama2",
"region:us"
] | null | 2024-04-16T04:15:46+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #license-llama2 #region-us
|
# DavidAU/SthenoWriter2.1-L2-13B-Q6_K-GGUF
This model was converted to GGUF format from 'Sao10K/SthenoWriter2.1-L2-13B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SthenoWriter2.1-L2-13B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/SthenoWriter2.1-L2-13B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-llama2 #region-us \n",
"# DavidAU/SthenoWriter2.1-L2-13B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/SthenoWriter2.1-L2-13B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-sidnarsipur/controlnet_models
These are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: Height Map

prompt: Height Map

prompt: Height Map

prompt: Height Map

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
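Since the snippet above is still a TODO, here is a hedged reconstruction based on this card's metadata (ControlNet weights on `stabilityai/stable-diffusion-2-1-base`, prompt "Height Map"). The conditioning image path is a hypothetical placeholder, and fp16/CUDA are assumptions.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("sidnarsipur/controlnet_height", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("conditioning.png")  # placeholder conditioning image
image = pipe(prompt="Height Map", image=cond).images[0]
image.save("height_map.png")
```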
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training"], "base_model": "stabilityai/stable-diffusion-2-1-base", "inference": true} | sidnarsipur/controlnet_height | null | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-16T04:15:54+00:00 | [] | [] | TAGS
#diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #region-us
|
# controlnet-sidnarsipur/controlnet_models
These are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: Height Map
!images_0)
prompt: Height Map
!images_1)
prompt: Height Map
!images_2)
prompt: Height Map
!images_3)
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# controlnet-sidnarsipur/controlnet_models\n\nThese are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with new type of conditioning.\nYou can find some example images below.\n\nprompt: Height Map\n!images_0)\nprompt: Height Map\n!images_1)\nprompt: Height Map\n!images_2)\nprompt: Height Map\n!images_3)",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #region-us \n",
"# controlnet-sidnarsipur/controlnet_models\n\nThese are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with new type of conditioning.\nYou can find some example images below.\n\nprompt: Height Map\n!images_0)\nprompt: Height Map\n!images_1)\nprompt: Height Map\n!images_2)\nprompt: Height Map\n!images_3)",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
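A concrete (hypothetical) instantiation of the placeholders — the config path follows the Deep RL course convention and the run id is an assumption:

```bash
mlagents-learn ./config/ppo/Huggy.yaml --run-id=Huggy --resume
```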
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: keshav-kumar/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | keshav-kumar/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null | 2024-04-16T04:16:07+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Find your model_id: keshav-kumar/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: keshav-kumar/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: keshav-kumar/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
null | null |
# DavidAU/Stheno-1.10-L2-13B-Q6_K-GGUF
This model was converted to GGUF format from [`Sao10K/Stheno-1.10-L2-13B`](https://huggingface.co/Sao10K/Stheno-1.10-L2-13B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Stheno-1.10-L2-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Stheno-1.10-L2-13B-Q6_K-GGUF --model stheno-1.10-l2-13b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Stheno-1.10-L2-13B-Q6_K-GGUF --model stheno-1.10-l2-13b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m stheno-1.10-l2-13b.Q6_K.gguf -n 128
```
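
As an alternative to the CLI, the same GGUF file can be loaded from Python via the `llama-cpp-python` bindings; a minimal sketch, assuming you downloaded the quant named above to the working directory:

```python
from llama_cpp import Llama

# Load the Q6_K quant downloaded from this repo; n_ctx matches the server example above.
llm = Llama(model_path="stheno-1.10-l2-13b.Q6_K.gguf", n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```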
| {"language": ["en"], "license": "llama2", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/Stheno-1.10-L2-13B-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:llama2",
"region:us"
] | null | 2024-04-16T04:17:24+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #license-llama2 #region-us
|
# DavidAU/Stheno-1.10-L2-13B-Q6_K-GGUF
This model was converted to GGUF format from 'Sao10K/Stheno-1.10-L2-13B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Stheno-1.10-L2-13B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Stheno-1.10-L2-13B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-llama2 #region-us \n",
"# DavidAU/Stheno-1.10-L2-13B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Stheno-1.10-L2-13B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | galbitang/qlora-koalpaca-polyglot-12.8b-50step_user111 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:18:55+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# WizardLM-2-8x22B - EXL2 2.75bpw
This is a 2.75bpw EXL2 quant of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
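
Outside of Text Generation WebUI, the quant can also be loaded directly with the exllamav2 Python library. A minimal sketch — the local model path and sampling settings below are placeholders, not part of this release:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/WizardLM-2-8x22B_exl2_2.75bpw"  # local download path (assumption)
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache lets load_autosplit spread layers across GPUs
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # placeholder sampling choice

print(generator.generate_simple("The meaning of life is", settings, 64))
```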
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 7.0 | 4.5859 |
| 6.0 | 4.6252 |
| 5.5 | 4.6493 |
| 5.0 | 4.6937 |
| 4.5 | 4.8029 |
| 4.0 | 4.9372 |
| 3.5 | 5.1336 |
| 3.25 | 5.3636 |
| 3.0 | 5.5468 |
| 2.75 | 5.8255 |
| 2.5 | 6.3362 |
| 2.25 | 7.7763 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
DATA_SET=/root/wikitext/wikitext-2-v1.parquet
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
BIT_PRECISIONS=(6.0 5.5 5.0 4.5 4.0 3.5 3.25 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
LOCAL_FOLDER="/root/models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
REMOTE_FOLDER="Dracones/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ ! -d "$LOCAL_FOLDER" ]; then
huggingface-cli download --local-dir-use-symlinks=False --local-dir "${LOCAL_FOLDER}" "${REMOTE_FOLDER}" >> /root/download.log 2>&1
fi
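  # -gs splits the model's layers across four GPUs (exllamav2's gpu_split option)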
output=$(python test_inference.py -m "$LOCAL_FOLDER" -gs 40,40,40,40 -ed "$DATA_SET")
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
# rm -rf "${LOCAL_FOLDER}"
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
# Define variables
MODEL_DIR="/mnt/storage/models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "base_model": "microsoft/WizardLM-2-8x22B"} | Dracones/WizardLM-2-8x22B_exl2_2.75bpw | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"en",
"base_model:microsoft/WizardLM-2-8x22B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:21:09+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| WizardLM-2-8x22B - EXL2 2.75bpw
===============================
This is a 2.75bpw EXL2 quant of microsoft/WizardLM-2-8x22B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | zzttbrdd/sn6_04m | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:22:27+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | swj0419/bbc_STEP0000040 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:22:31+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6287
- F1 Score: 0.7148
- Accuracy: 0.7152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
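
For reference, the list above maps onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder, and the Adam betas/epsilon shown in the list are the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                    # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults listed above.
)
```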
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6368 | 8.33 | 200 | 0.6013 | 0.6759 | 0.6777 |
| 0.5726 | 16.67 | 400 | 0.5802 | 0.6997 | 0.6997 |
| 0.5441 | 25.0 | 600 | 0.5747 | 0.7084 | 0.7084 |
| 0.5211 | 33.33 | 800 | 0.5795 | 0.7160 | 0.7160 |
| 0.5008 | 41.67 | 1000 | 0.6091 | 0.7143 | 0.7145 |
| 0.4862 | 50.0 | 1200 | 0.5893 | 0.7176 | 0.7176 |
| 0.4681 | 58.33 | 1400 | 0.6120 | 0.7112 | 0.7118 |
| 0.4572 | 66.67 | 1600 | 0.5937 | 0.7164 | 0.7166 |
| 0.4442 | 75.0 | 1800 | 0.6070 | 0.7171 | 0.7174 |
| 0.4357 | 83.33 | 2000 | 0.6210 | 0.7237 | 0.7238 |
| 0.4272 | 91.67 | 2200 | 0.6181 | 0.7142 | 0.7152 |
| 0.4186 | 100.0 | 2400 | 0.6099 | 0.7219 | 0.7221 |
| 0.4121 | 108.33 | 2600 | 0.6353 | 0.7242 | 0.7247 |
| 0.4035 | 116.67 | 2800 | 0.6390 | 0.7219 | 0.7221 |
| 0.3975 | 125.0 | 3000 | 0.6477 | 0.7217 | 0.7221 |
| 0.3903 | 133.33 | 3200 | 0.6843 | 0.7197 | 0.7209 |
| 0.3852 | 141.67 | 3400 | 0.6636 | 0.7127 | 0.7147 |
| 0.3772 | 150.0 | 3600 | 0.6409 | 0.7232 | 0.7233 |
| 0.3721 | 158.33 | 3800 | 0.6684 | 0.7078 | 0.7101 |
| 0.3666 | 166.67 | 4000 | 0.6893 | 0.7143 | 0.7162 |
| 0.3596 | 175.0 | 4200 | 0.7089 | 0.7066 | 0.7095 |
| 0.3535 | 183.33 | 4400 | 0.6814 | 0.7116 | 0.7128 |
| 0.3475 | 191.67 | 4600 | 0.6955 | 0.7160 | 0.7171 |
| 0.3442 | 200.0 | 4800 | 0.6923 | 0.7149 | 0.7162 |
| 0.3387 | 208.33 | 5000 | 0.7043 | 0.7157 | 0.7164 |
| 0.3347 | 216.67 | 5200 | 0.7176 | 0.7074 | 0.7095 |
| 0.3302 | 225.0 | 5400 | 0.7191 | 0.7057 | 0.7071 |
| 0.3255 | 233.33 | 5600 | 0.7231 | 0.7069 | 0.7081 |
| 0.3225 | 241.67 | 5800 | 0.7333 | 0.7067 | 0.7083 |
| 0.3178 | 250.0 | 6000 | 0.7050 | 0.7085 | 0.7091 |
| 0.3137 | 258.33 | 6200 | 0.7560 | 0.7043 | 0.7069 |
| 0.3106 | 266.67 | 6400 | 0.7577 | 0.7056 | 0.7074 |
| 0.3065 | 275.0 | 6600 | 0.7488 | 0.7050 | 0.7064 |
| 0.3044 | 283.33 | 6800 | 0.7530 | 0.7057 | 0.7074 |
| 0.3019 | 291.67 | 7000 | 0.7674 | 0.7000 | 0.7022 |
| 0.2981 | 300.0 | 7200 | 0.7888 | 0.7007 | 0.7035 |
| 0.2955 | 308.33 | 7400 | 0.7871 | 0.7077 | 0.7096 |
| 0.2934 | 316.67 | 7600 | 0.7727 | 0.7086 | 0.7098 |
| 0.2883 | 325.0 | 7800 | 0.7854 | 0.7036 | 0.7056 |
| 0.2878 | 333.33 | 8000 | 0.7634 | 0.7065 | 0.7078 |
| 0.2869 | 341.67 | 8200 | 0.7602 | 0.7070 | 0.7083 |
| 0.2844 | 350.0 | 8400 | 0.7759 | 0.7059 | 0.7073 |
| 0.2831 | 358.33 | 8600 | 0.7756 | 0.7062 | 0.7073 |
| 0.2806 | 366.67 | 8800 | 0.7803 | 0.7056 | 0.7069 |
| 0.2784 | 375.0 | 9000 | 0.7851 | 0.7050 | 0.7066 |
| 0.2785 | 383.33 | 9200 | 0.7886 | 0.7041 | 0.7056 |
| 0.277 | 391.67 | 9400 | 0.7858 | 0.7036 | 0.7052 |
| 0.2765 | 400.0 | 9600 | 0.7990 | 0.7045 | 0.7063 |
| 0.2748 | 408.33 | 9800 | 0.8015 | 0.7029 | 0.7049 |
| 0.2756 | 416.67 | 10000 | 0.7966 | 0.7028 | 0.7046 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T04:23:23+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_8192\_512\_30M-L32\_all
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6287
* F1 Score: 0.7148
* Accuracy: 0.7152
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdc_influenza_pagasus-x-large
This model is a fine-tuned version of [google/pegasus-x-large](https://huggingface.co/google/pegasus-x-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0621
- Rouge1: 0.6667
- Rouge2: 0.6562
- Rougel: 0.6667
- Rougelsum: 0.6667
- Gen Len: 51.0
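
A minimal inference sketch for this checkpoint — the repo id comes from this card's metadata, and the input text is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="PergaZuZ/cdc_influenza_pagasus-x-large")

text = "Influenza activity remained elevated this week, with ..."  # placeholder input
print(summarizer(text, max_length=64)[0]["summary_text"])
```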
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 0.3086 | 0.2386 | 0.2184 | 0.2386 | 0.2386 | 176.0 |
| No log | 2.0 | 2 | 0.1118 | 0.6667 | 0.6562 | 0.6667 | 0.6667 | 51.0 |
| No log | 3.0 | 3 | 0.0755 | 0.6667 | 0.6562 | 0.6667 | 0.6667 | 51.0 |
| No log | 4.0 | 4 | 0.0621 | 0.6667 | 0.6562 | 0.6667 | 0.6667 | 51.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-x-large", "model-index": [{"name": "cdc_influenza_pagasus-x-large", "results": []}]} | PergaZuZ/cdc_influenza_pagasus-x-large | null | [
"transformers",
"tensorboard",
"safetensors",
"pegasus_x",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-x-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:24:21+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pegasus_x #text2text-generation #generated_from_trainer #base_model-google/pegasus-x-large #autotrain_compatible #endpoints_compatible #region-us
| cdc\_influenza\_pagasus-x-large
===============================
This model is a fine-tuned version of google/pegasus-x-large on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0621
* Rouge1: 0.6667
* Rouge2: 0.6562
* Rougel: 0.6667
* Rougelsum: 0.6667
* Gen Len: 51.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pegasus_x #text2text-generation #generated_from_trainer #base_model-google/pegasus-x-large #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | GalaganKV/Mistral-7B-Instruct-v0.2-MultiTask-v4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:24:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-malayalam_mixeddataset_two.0
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1425
- Wer: 0.1451
## Model description
More information needed
## Intended uses & limitations
More information needed
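In the absence of a documented snippet, one plausible inference path is the 🤗 `pipeline` API (a hedged sketch; the audio path is a placeholder):

```python
# A hedged sketch, not an official snippet: transcribe Malayalam audio with
# the fine-tuned Wav2Vec2-BERT (CTC) checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Bajiyo/w2v-bert-2.0-malayalam_mixeddataset_two.0",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```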
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reconstructing them follows this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
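For illustration only, the listed values expressed as 🤗 `TrainingArguments` (a hedged reconstruction; the output directory is a placeholder and the model/data wiring is omitted):

```python
from transformers import TrainingArguments

# A hedged reconstruction of the hyperparameters listed above (transformers 4.39).
args = TrainingArguments(
    output_dir="w2v-bert-2.0-malayalam_mixeddataset_two.0",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 32
    warmup_steps=500,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
)
```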
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.9341 | 0.24 | 300 | 0.4363 | 0.5138 |
| 0.228 | 0.47 | 600 | 0.3644 | 0.4847 |
| 0.1828 | 0.71 | 900 | 0.2752 | 0.3807 |
| 0.1479 | 0.95 | 1200 | 0.2671 | 0.3583 |
| 0.1213 | 1.19 | 1500 | 0.2291 | 0.2861 |
| 0.1114 | 1.42 | 1800 | 0.2098 | 0.2754 |
| 0.1049 | 1.66 | 2100 | 0.2088 | 0.2832 |
| 0.0962 | 1.9 | 2400 | 0.1789 | 0.2501 |
| 0.0777 | 2.14 | 2700 | 0.1945 | 0.2371 |
| 0.0685 | 2.37 | 3000 | 0.1788 | 0.2433 |
| 0.0663 | 2.61 | 3300 | 0.1707 | 0.2264 |
| 0.0652 | 2.85 | 3600 | 0.1834 | 0.2227 |
| 0.0573 | 3.08 | 3900 | 0.1663 | 0.2065 |
| 0.0445 | 3.32 | 4200 | 0.1479 | 0.1981 |
| 0.0417 | 3.56 | 4500 | 0.1477 | 0.1779 |
| 0.0415 | 3.8 | 4800 | 0.1504 | 0.1774 |
| 0.0368 | 4.03 | 5100 | 0.1407 | 0.1655 |
| 0.0248 | 4.27 | 5400 | 0.1568 | 0.1672 |
| 0.0258 | 4.51 | 5700 | 0.1495 | 0.1582 |
| 0.0227 | 4.74 | 6000 | 0.1460 | 0.1510 |
| 0.0225 | 4.98 | 6300 | 0.1425 | 0.1451 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/w2v-bert-2.0", "model-index": [{"name": "w2v-bert-2.0-malayalam_mixeddataset_two.0", "results": []}]} | Bajiyo/w2v-bert-2.0-malayalam_mixeddataset_two.0 | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-16T04:25:01+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2-bert #automatic-speech-recognition #generated_from_trainer #base_model-facebook/w2v-bert-2.0 #license-mit #endpoints_compatible #has_space #region-us
| w2v-bert-2.0-malayalam\_mixeddataset\_two.0
===========================================
This model is a fine-tuned version of facebook/w2v-bert-2.0 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1425
* Wer: 0.1451
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.1+cu121
* Datasets 2.16.1
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.1+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2-bert #automatic-speech-recognition #generated_from_trainer #base_model-facebook/w2v-bert-2.0 #license-mit #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.1+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5971
- F1 Score: 0.7263
- Accuracy: 0.7264
## Model description
More information needed
## Intended uses & limitations
More information needed
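Until usage details are added, here is a hedged loading sketch with PEFT (untested; the sequence-classification head class and the `trust_remote_code` flag are assumptions about the seqsight backbone):

```python
# A hedged sketch: attach the adapter in this repo to its seqsight backbone.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_8192_512_30M-L32_all"

base = AutoModelForSequenceClassification.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
```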
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6303 | 9.52 | 200 | 0.5712 | 0.7085 | 0.7085 |
| 0.5576 | 19.05 | 400 | 0.5542 | 0.7232 | 0.7232 |
| 0.5229 | 28.57 | 600 | 0.5665 | 0.7278 | 0.7281 |
| 0.4935 | 38.1 | 800 | 0.5640 | 0.7306 | 0.7305 |
| 0.4694 | 47.62 | 1000 | 0.6016 | 0.7241 | 0.7260 |
| 0.4485 | 57.14 | 1200 | 0.5828 | 0.7234 | 0.7243 |
| 0.4311 | 66.67 | 1400 | 0.6120 | 0.7309 | 0.7315 |
| 0.4171 | 76.19 | 1600 | 0.6016 | 0.7392 | 0.7392 |
| 0.4039 | 85.71 | 1800 | 0.6071 | 0.7293 | 0.7307 |
| 0.3939 | 95.24 | 2000 | 0.6216 | 0.7337 | 0.7343 |
| 0.3827 | 104.76 | 2200 | 0.6258 | 0.7337 | 0.7339 |
| 0.3737 | 114.29 | 2400 | 0.6448 | 0.7307 | 0.7322 |
| 0.3649 | 123.81 | 2600 | 0.6501 | 0.7233 | 0.7251 |
| 0.3545 | 133.33 | 2800 | 0.6472 | 0.7253 | 0.7260 |
| 0.3475 | 142.86 | 3000 | 0.6646 | 0.7223 | 0.7239 |
| 0.339 | 152.38 | 3200 | 0.6704 | 0.7260 | 0.7262 |
| 0.3318 | 161.9 | 3400 | 0.6901 | 0.7170 | 0.7192 |
| 0.3244 | 171.43 | 3600 | 0.7158 | 0.7217 | 0.7236 |
| 0.3181 | 180.95 | 3800 | 0.7421 | 0.7155 | 0.7185 |
| 0.311 | 190.48 | 4000 | 0.7391 | 0.7131 | 0.7162 |
| 0.3047 | 200.0 | 4200 | 0.7242 | 0.7163 | 0.7175 |
| 0.2988 | 209.52 | 4400 | 0.7356 | 0.7129 | 0.7147 |
| 0.2914 | 219.05 | 4600 | 0.7492 | 0.7134 | 0.7151 |
| 0.2861 | 228.57 | 4800 | 0.7538 | 0.7110 | 0.7132 |
| 0.2797 | 238.1 | 5000 | 0.7640 | 0.7089 | 0.7115 |
| 0.2766 | 247.62 | 5200 | 0.7636 | 0.7132 | 0.7143 |
| 0.2698 | 257.14 | 5400 | 0.8062 | 0.7127 | 0.7151 |
| 0.265 | 266.67 | 5600 | 0.7951 | 0.7117 | 0.7140 |
| 0.2613 | 276.19 | 5800 | 0.8163 | 0.7081 | 0.7109 |
| 0.2579 | 285.71 | 6000 | 0.7568 | 0.7150 | 0.7157 |
| 0.2517 | 295.24 | 6200 | 0.7864 | 0.7125 | 0.7132 |
| 0.2499 | 304.76 | 6400 | 0.8258 | 0.7077 | 0.7104 |
| 0.2459 | 314.29 | 6600 | 0.8173 | 0.7101 | 0.7123 |
| 0.2412 | 323.81 | 6800 | 0.8133 | 0.7084 | 0.7100 |
| 0.2374 | 333.33 | 7000 | 0.8416 | 0.7083 | 0.7108 |
| 0.235 | 342.86 | 7200 | 0.8388 | 0.7048 | 0.7072 |
| 0.2334 | 352.38 | 7400 | 0.8477 | 0.7116 | 0.7132 |
| 0.2309 | 361.9 | 7600 | 0.8589 | 0.7089 | 0.7106 |
| 0.2291 | 371.43 | 7800 | 0.8328 | 0.7105 | 0.7119 |
| 0.2256 | 380.95 | 8000 | 0.8533 | 0.7062 | 0.7083 |
| 0.2255 | 390.48 | 8200 | 0.8632 | 0.7096 | 0.7117 |
| 0.2221 | 400.0 | 8400 | 0.8709 | 0.7065 | 0.7087 |
| 0.2203 | 409.52 | 8600 | 0.8553 | 0.7028 | 0.7053 |
| 0.2187 | 419.05 | 8800 | 0.8809 | 0.7081 | 0.7100 |
| 0.2174 | 428.57 | 9000 | 0.8646 | 0.7088 | 0.7106 |
| 0.2156 | 438.1 | 9200 | 0.8621 | 0.7107 | 0.7123 |
| 0.2141 | 447.62 | 9400 | 0.8808 | 0.7086 | 0.7104 |
| 0.2148 | 457.14 | 9600 | 0.8872 | 0.7064 | 0.7087 |
| 0.2133 | 466.67 | 9800 | 0.8884 | 0.7080 | 0.7100 |
| 0.213 | 476.19 | 10000 | 0.8793 | 0.7083 | 0.7102 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T04:25:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_8192\_512\_30M-L32\_all
===============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5971
* F1 Score: 0.7263
* Accuracy: 0.7264
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Sao10K/Euryale-Inverted-L2-70B
**No more quants are incoming, as llama.cpp crashes when generating them.**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
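As a concrete illustration, split files only need to be byte-concatenated before use; a minimal Python sketch (file names taken from the Q6_K entry in the table below):

```python
# A minimal sketch: byte-concatenate split GGUF parts back into one file.
from pathlib import Path

parts = sorted(Path(".").glob("Euryale-Inverted-L2-70B.i1-Q6_K.gguf.part*"))
with open("Euryale-Inverted-L2-70B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:  # .part1of2, .part2of2 sort in order
        out.write(part.read_bytes())
```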
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF/resolve/main/Euryale-Inverted-L2-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "base_model": "Sao10K/Euryale-Inverted-L2-70B", "no_imatrix": "GGML_ASSERT: llama.cpp/ggml-quants.c:11239: grid_index >= 0", "quantized_by": "mradermacher"} | mradermacher/Euryale-Inverted-L2-70B-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Euryale-Inverted-L2-70B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:27:05+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-Sao10K/Euryale-Inverted-L2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
No more quants are incoming, as URL crashes when generating them.
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-Sao10K/Euryale-Inverted-L2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n"
] |
feature-extraction | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bge_ver13
This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
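Pending details from the authors, a hedged embedding sketch (CLS pooling plus L2 normalization mirrors the bge-m3 dense-retrieval convention; this is an assumption, not a documented recipe for this fine-tune):

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "comet24082002/finetuned_bge_ver13"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo).eval()

batch = tokenizer(["a sample query", "a sample passage"], padding=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**batch).last_hidden_state[:, 0]          # CLS pooling
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)   # unit vectors
print((embeddings[0] @ embeddings[1]).item())                    # cosine similarity
```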
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "BAAI/bge-m3", "model-index": [{"name": "finetuned_bge_ver13", "results": []}]} | comet24082002/finetuned_bge_ver13 | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"base_model:BAAI/bge-m3",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:30:02+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us
|
# finetuned_bge_ver13
This model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# finetuned_bge_ver13\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us \n",
"# finetuned_bge_ver13\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
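No snippet is provided yet; a hedged sketch for attaching the LoRA adapter follows (the `openai/whisper-medium` base and Javanese target are inferred from the repo name, not confirmed by the card):

```python
# A hedged sketch, untested against this adapter.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "simonamdev/openai-whisper-medium-jv-PeftType.LORA")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```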
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | simonamdev/openai-whisper-medium-jv-PeftType.LORA | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:30:05+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7090
- F1 Score: 0.6792
- Accuracy: 0.6803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5348 | 66.67 | 200 | 0.6379 | 0.7288 | 0.7292 |
| 0.2491 | 133.33 | 400 | 0.9154 | 0.7077 | 0.7080 |
| 0.1432 | 200.0 | 600 | 1.1114 | 0.7210 | 0.7210 |
| 0.1008 | 266.67 | 800 | 1.2640 | 0.7129 | 0.7129 |
| 0.0788 | 333.33 | 1000 | 1.3217 | 0.7014 | 0.7015 |
| 0.0675 | 400.0 | 1200 | 1.3820 | 0.7019 | 0.7031 |
| 0.0568 | 466.67 | 1400 | 1.4484 | 0.7107 | 0.7113 |
| 0.0489 | 533.33 | 1600 | 1.5371 | 0.7076 | 0.7080 |
| 0.0431 | 600.0 | 1800 | 1.6683 | 0.7111 | 0.7113 |
| 0.0377 | 666.67 | 2000 | 1.7022 | 0.7005 | 0.7015 |
| 0.0346 | 733.33 | 2200 | 1.7622 | 0.7063 | 0.7064 |
| 0.0311 | 800.0 | 2400 | 1.8043 | 0.7079 | 0.7080 |
| 0.0275 | 866.67 | 2600 | 2.0096 | 0.7088 | 0.7096 |
| 0.0256 | 933.33 | 2800 | 1.9949 | 0.7206 | 0.7210 |
| 0.0247 | 1000.0 | 3000 | 1.8230 | 0.7161 | 0.7162 |
| 0.0222 | 1066.67 | 3200 | 2.0215 | 0.7052 | 0.7064 |
| 0.021 | 1133.33 | 3400 | 1.8571 | 0.7123 | 0.7129 |
| 0.0202 | 1200.0 | 3600 | 1.8276 | 0.7089 | 0.7096 |
| 0.0183 | 1266.67 | 3800 | 2.0339 | 0.7075 | 0.7080 |
| 0.0182 | 1333.33 | 4000 | 2.0567 | 0.7137 | 0.7145 |
| 0.0179 | 1400.0 | 4200 | 1.8955 | 0.7106 | 0.7113 |
| 0.0163 | 1466.67 | 4400 | 2.0246 | 0.7076 | 0.7080 |
| 0.0159 | 1533.33 | 4600 | 2.1482 | 0.6968 | 0.6982 |
| 0.0154 | 1600.0 | 4800 | 2.0812 | 0.7126 | 0.7129 |
| 0.0153 | 1666.67 | 5000 | 2.1545 | 0.7091 | 0.7096 |
| 0.0144 | 1733.33 | 5200 | 2.0842 | 0.6900 | 0.6917 |
| 0.0136 | 1800.0 | 5400 | 2.0233 | 0.7193 | 0.7194 |
| 0.0138 | 1866.67 | 5600 | 2.0627 | 0.7092 | 0.7096 |
| 0.0134 | 1933.33 | 5800 | 1.9228 | 0.7028 | 0.7031 |
| 0.0132 | 2000.0 | 6000 | 2.1282 | 0.7082 | 0.7096 |
| 0.0127 | 2066.67 | 6200 | 2.1734 | 0.6968 | 0.6982 |
| 0.012 | 2133.33 | 6400 | 2.0638 | 0.7115 | 0.7129 |
| 0.0119 | 2200.0 | 6600 | 1.9969 | 0.7159 | 0.7162 |
| 0.0119 | 2266.67 | 6800 | 1.9693 | 0.7241 | 0.7243 |
| 0.0116 | 2333.33 | 7000 | 2.0487 | 0.7208 | 0.7210 |
| 0.0114 | 2400.0 | 7200 | 2.0475 | 0.7126 | 0.7129 |
| 0.0108 | 2466.67 | 7400 | 2.1500 | 0.7140 | 0.7145 |
| 0.0105 | 2533.33 | 7600 | 2.1311 | 0.7140 | 0.7145 |
| 0.0104 | 2600.0 | 7800 | 2.0777 | 0.7174 | 0.7178 |
| 0.0103 | 2666.67 | 8000 | 2.0597 | 0.7158 | 0.7162 |
| 0.01 | 2733.33 | 8200 | 2.0190 | 0.7110 | 0.7113 |
| 0.0099 | 2800.0 | 8400 | 2.0275 | 0.7106 | 0.7113 |
| 0.0097 | 2866.67 | 8600 | 2.1698 | 0.7025 | 0.7031 |
| 0.0096 | 2933.33 | 8800 | 2.1909 | 0.7159 | 0.7162 |
| 0.0094 | 3000.0 | 9000 | 2.2460 | 0.7088 | 0.7096 |
| 0.0091 | 3066.67 | 9200 | 2.1258 | 0.7073 | 0.7080 |
| 0.0091 | 3133.33 | 9400 | 2.2269 | 0.7073 | 0.7080 |
| 0.009 | 3200.0 | 9600 | 2.1349 | 0.7072 | 0.7080 |
| 0.009 | 3266.67 | 9800 | 2.1818 | 0.7090 | 0.7096 |
| 0.009 | 3333.33 | 10000 | 2.1705 | 0.7090 | 0.7096 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T04:30:19+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_8192\_512\_30M-L32\_all
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7090
* F1 Score: 0.6792
* Accuracy: 0.6803
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet_normal
Estimate normal maps from basecolor maps.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
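Until the authors replace the TODO above, here is a minimal, hedged sketch (untested; the prompt, conditioning image, and generation settings are assumptions) that wires this ControlNet into the `stabilityai/stable-diffusion-2-1-base` pipeline it was trained against:

```python
# A hedged sketch, not the authors' verified usage.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "sidnarsipur/controlnet_normal", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

basecolor = load_image("basecolor.png")  # placeholder conditioning image
normal = pipe("normal map", image=basecolor, num_inference_steps=30).images[0]
normal.save("normal.png")
```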
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training"], "datasets": ["gvecchio/MatSynth"], "base_model": "stabilityai/stable-diffusion-2-1-base", "inference": true} | sidnarsipur/controlnet_normal | null | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"dataset:gvecchio/MatSynth",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-16T04:33:32+00:00 | [] | [] | TAGS
#diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #dataset-gvecchio/MatSynth #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #region-us
|
# controlnet_normal
Estimate normal maps from basecolor maps.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# controlnet_normal\n\nEstimate normal maps from basecolor maps.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #dataset-gvecchio/MatSynth #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #region-us \n",
"# controlnet_normal\n\nEstimate normal maps from basecolor maps.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biomistral-7b-wo-kqa_golden-sft
This model is a fine-tuned version of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7021
## Model description
More information needed
## Intended uses & limitations
More information needed
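Pending usage guidance, one hedged option is chat-style generation through the `text-generation` pipeline (the prompt is a placeholder; the chat template is assumed from the Mistral base):

```python
# A hedged sketch: chat-style generation via the text-generation pipeline.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="Minbyul/biomistral-7b-wo-kqa_golden-sft",
    device_map="auto",
)
messages = [{"role": "user", "content": "List common causes of iron-deficiency anemia."}]
reply = chat(messages, max_new_tokens=256)[0]["generated_text"][-1]["content"]
print(reply)
```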
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3819 | 0.96 | 6 | 1.0960 |
| 1.076 | 1.92 | 12 | 0.7884 |
| 0.8222 | 2.88 | 18 | 0.7021 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "BioMistral/BioMistral-7B", "model-index": [{"name": "biomistral-7b-wo-kqa_golden-sft", "results": []}]} | Minbyul/biomistral-7b-wo-kqa_golden-sft | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:BioMistral/BioMistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:37:46+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-BioMistral/BioMistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| biomistral-7b-wo-kqa\_golden-sft
================================
This model is a fine-tuned version of BioMistral/BioMistral-7B on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7021
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-BioMistral/BioMistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
feature-extraction | sentence-transformers |
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
For more details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:
- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
## News
- 1/30/2024: Release **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularity (input length up to 8192), and **M**ulti-functionality (unification of dense, lexical, and multi-vector/ColBERT retrieval).
It is the first embedding model that supports all three retrieval methods, achieving new SOTA on multilingual (MIRACL) and cross-lingual (MKQA) benchmarks.
[Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLMs. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release **LLaRA**, a LLaMA-7B-based dense retriever that achieves state-of-the-art performance on MS MARCO and BEIR. The model and code will be open-sourced; please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
- 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE have been released
- 09/12/2023: New models:
- **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
- **Updated embedding models**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance their retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support for adding an instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, with the **best performance among models of the same size 🤗**
- 08/02/2023: Release the `bge-large-*` (short for BAAI General Embedding) models, which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | Inference / Fine-tune | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, and you can use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Unlike an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by simpler models.
For example, use a bge embedding model to retrieve the top 100 relevant documents, then use the bge reranker to re-rank those 100 documents to get the final top-3 results; a minimal sketch of this two-stage pipeline follows.
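As an illustration of the retrieve-then-rerank pipeline described above, here is a minimal sketch using the `FlagEmbedding` APIs documented below. The corpus, query, and top-k sizes are hypothetical placeholders.

```python
# A minimal retrieve-then-rerank sketch (corpus, query, and sizes are illustrative).
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

corpus = ["passage 1 ...", "passage 2 ...", "passage 3 ..."]  # your document collection
query = "what is panda?"

# Stage 1: dense retrieval with the bi-encoder embedding model.
embedder = FlagModel('BAAI/bge-large-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
                     use_fp16=True)
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
dense_scores = (q_emb @ p_emb.T)[0]
top_k = np.argsort(-dense_scores)[:100]  # keep the top-100 candidates

# Stage 2: re-rank the candidates with the cross-encoder.
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = np.asarray(reranker.compute_score([[query, corpus[i]] for i in top_k]))
top_3 = [corpus[i] for i in top_k[np.argsort(-rerank_scores)[:3]]]  # final top-3
print(top_3)
```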
All models have been uploaded to the Huggingface Hub; you can find them at https://huggingface.co/BAAI.
If you cannot access the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be used to calculate similarity directly; it must be fine-tuned with contrastive learning first.
- If the accuracy of the fine-tuned model is still not high enough, use or fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models by contrastive learning with a temperature of 0.01,
the similarity distribution of the current BGE models lies roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not their absolute values.**
If you need to filter similar sentences based on a similarity threshold,
select an appropriate threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9); a minimal sketch is shown below.
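The following sketch applies such a threshold. The 0.85 value and the example sentences are illustrative only; tune the threshold on the similarity distribution of your own data.

```python
# A minimal sketch of threshold-based filtering (threshold and sentences are illustrative).
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-en-v1.5', use_fp16=True)
anchor = "A cat is sitting on a mat."
candidates = ["The cat sat on the mat.", "A feline rested on the rug.", "Stock prices fell today."]

anchor_emb = model.encode([anchor])
cand_embs = model.encode(candidates)
sims = (anchor_emb @ cand_embs.T)[0]  # FlagModel returns normalized embeddings, so this is cosine similarity

threshold = 0.85  # pick based on the similarity distribution of your data
similar = [sent for sent, sim in zip(candidates, sims) if sim >= threshold]
print(similar)
```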
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved the retrieval ability when no instruction is used;
using no instruction causes only a slight degradation in retrieval performance compared with using one.
So for convenience you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions to these short queries.
**The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.**
In all cases, the documents/passages do not need the instruction.
</details>
## Usage
### Usage for Embedding Model
Here are some examples of using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for the s2p (short query to long passage) retrieval task, use encode_queries(), which automatically adds the instruction to each query
# the corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel uses all available GPUs when encoding. Set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs, as sketched below.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
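A minimal sketch of GPU selection; the device IDs are illustrative, and the variable must be set before the model is constructed:

```python
# Restrict FlagModel to specific GPUs (set before constructing the model).
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # use GPUs 0 and 1 (illustrative IDs)
# os.environ["CUDA_VISIBLE_DEVICES"] = ""   # hide all GPUs to force CPU encoding
```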
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (for the instructions, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in Langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model as follows: first pass your input through the transformer model, then take the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for the s2p (short query to long passage) retrieval task, add an instruction to each query (no instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
#### Usage of the ONNX files
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13")
model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13",file_name="onnx/model.onnx")
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for the s2p (short query to long passage) retrieval task, add an instruction to each query (no instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
model_output_ort = model_ort(**encoded_input)
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# model_output and model_output_ort are identical
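# e.g., to check this numerically (an assumed verification, not part of the original example):
# torch.testing.assert_close(model_output.last_hidden_state, model_output_ort.last_hidden_state)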
```
#### Usage via infinity
It's also possible to deploy the ONNX files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.
```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs
sentences = ["Embed this is sentence via Infinity.", "Paris is in France."]
engine = AsyncEmbeddingEngine.from_args(EngineArgs(
    model_name_or_path="BAAI/bge-large-en-v1.5",
    device="cpu",
    engine="optimum",  # or engine="torch"
))
async def main():
async with engine:
embeddings, usage = await engine.embed(sentences=sentences)
asyncio.run(main())
```
### Usage for Reranker
Unlike an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with a cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
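Since the raw scores above are unbounded logits, you can optionally map them to (0, 1) with a sigmoid when a normalized score is more convenient. This is a convenience sketch, not part of the official API; the logit values below are illustrative.

```python
import torch

# `scores` stands in for the unbounded reranker logits computed above (illustrative values).
scores = torch.tensor([-5.6, 5.3])
probs = torch.sigmoid(scores)  # monotonic mapping to (0, 1); the ranking is unchanged
print(probs)
```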
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings, which consists of 31 datasets across 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for the evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks.
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned first.
For more training details on bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., a bi-encoder) but also more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by an embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune the reranker easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation:
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge. | {"language": ["en"], "license": "mit", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "mteb"], "model-index": [{"name": "bge-base-en-v1.5", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 76.14925373134328}, {"type": "ap", "value": 39.32336517995478}, {"type": "f1", "value": 70.16902252611425}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 93.386825}, {"type": "ap", "value": 90.21276917991995}, {"type": "f1", "value": 93.37741030006174}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 48.846000000000004}, {"type": "f1", "value": 48.14646269778261}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 40.754000000000005}, {"type": "map_at_10", "value": 55.761}, {"type": "map_at_100", "value": 56.330999999999996}, {"type": "map_at_1000", "value": 56.333999999999996}, {"type": "map_at_3", "value": 51.92}, {"type": "map_at_5", "value": 54.010999999999996}, {"type": "mrr_at_1", "value": 41.181}, {"type": "mrr_at_10", "value": 55.967999999999996}, {"type": "mrr_at_100", "value": 56.538}, {"type": "mrr_at_1000", "value": 56.542}, {"type": "mrr_at_3", "value": 51.980000000000004}, {"type": "mrr_at_5", "value": 54.208999999999996}, {"type": "ndcg_at_1", "value": 40.754000000000005}, {"type": "ndcg_at_10", "value": 63.605000000000004}, {"type": "ndcg_at_100", "value": 66.05199999999999}, {"type": "ndcg_at_1000", "value": 66.12}, {"type": "ndcg_at_3", "value": 55.708}, {"type": "ndcg_at_5", "value": 59.452000000000005}, {"type": "precision_at_1", "value": 40.754000000000005}, {"type": "precision_at_10", "value": 8.841000000000001}, {"type": "precision_at_100", "value": 0.991}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 22.238}, {"type": "precision_at_5", "value": 15.149000000000001}, {"type": "recall_at_1", "value": 40.754000000000005}, {"type": "recall_at_10", "value": 88.407}, {"type": "recall_at_100", "value": 99.14699999999999}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 66.714}, {"type": "recall_at_5", "value": 75.747}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 48.74884539679369}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": 
"f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 42.8075893810716}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 62.128470519187736}, {"type": "mrr", "value": 74.28065778481289}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.24629081484655}, {"type": "cos_sim_spearman", "value": 86.93752309911496}, {"type": "euclidean_pearson", "value": 87.58589628573816}, {"type": "euclidean_spearman", "value": 88.05622328825284}, {"type": "manhattan_pearson", "value": 87.5594959805773}, {"type": "manhattan_spearman", "value": 88.19658793233961}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 86.9512987012987}, {"type": "f1", "value": 86.92515357973708}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 39.10263762928872}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 36.69711517426737}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 32.327}, {"type": "map_at_10", "value": 44.099}, {"type": "map_at_100", "value": 45.525}, {"type": "map_at_1000", "value": 45.641999999999996}, {"type": "map_at_3", "value": 40.47}, {"type": "map_at_5", "value": 42.36}, {"type": "mrr_at_1", "value": 39.199}, {"type": "mrr_at_10", "value": 49.651}, {"type": "mrr_at_100", "value": 50.29}, {"type": "mrr_at_1000", "value": 50.329}, {"type": "mrr_at_3", "value": 46.924}, {"type": "mrr_at_5", "value": 48.548}, {"type": "ndcg_at_1", "value": 39.199}, {"type": "ndcg_at_10", "value": 50.773}, {"type": "ndcg_at_100", "value": 55.67999999999999}, {"type": "ndcg_at_1000", "value": 57.495}, {"type": "ndcg_at_3", "value": 45.513999999999996}, {"type": "ndcg_at_5", "value": 47.703}, {"type": "precision_at_1", "value": 39.199}, {"type": "precision_at_10", "value": 9.914000000000001}, {"type": "precision_at_100", "value": 1.5310000000000001}, {"type": "precision_at_1000", "value": 0.198}, {"type": "precision_at_3", "value": 21.984}, {"type": "precision_at_5", "value": 15.737000000000002}, {"type": "recall_at_1", "value": 32.327}, {"type": "recall_at_10", "value": 63.743}, {"type": "recall_at_100", "value": 84.538}, {"type": "recall_at_1000", "value": 96.089}, {"type": "recall_at_3", "value": 48.065000000000005}, {"type": "recall_at_5", "value": 54.519}, {"type": "map_at_1", "value": 32.671}, {"type": "map_at_10", "value": 42.954}, {"type": "map_at_100", "value": 44.151}, {"type": "map_at_1000", "value": 
44.287}, {"type": "map_at_3", "value": 39.912}, {"type": "map_at_5", "value": 41.798}, {"type": "mrr_at_1", "value": 41.465}, {"type": "mrr_at_10", "value": 49.351}, {"type": "mrr_at_100", "value": 49.980000000000004}, {"type": "mrr_at_1000", "value": 50.016000000000005}, {"type": "mrr_at_3", "value": 47.144000000000005}, {"type": "mrr_at_5", "value": 48.592999999999996}, {"type": "ndcg_at_1", "value": 41.465}, {"type": "ndcg_at_10", "value": 48.565999999999995}, {"type": "ndcg_at_100", "value": 52.76499999999999}, {"type": "ndcg_at_1000", "value": 54.749}, {"type": "ndcg_at_3", "value": 44.57}, {"type": "ndcg_at_5", "value": 46.759}, {"type": "precision_at_1", "value": 41.465}, {"type": "precision_at_10", "value": 9.107999999999999}, {"type": "precision_at_100", "value": 1.433}, {"type": "precision_at_1000", "value": 0.191}, {"type": "precision_at_3", "value": 21.423000000000002}, {"type": "precision_at_5", "value": 15.414}, {"type": "recall_at_1", "value": 32.671}, {"type": "recall_at_10", "value": 57.738}, {"type": "recall_at_100", "value": 75.86500000000001}, {"type": "recall_at_1000", "value": 88.36}, {"type": "recall_at_3", "value": 45.626}, {"type": "recall_at_5", "value": 51.812000000000005}, {"type": "map_at_1", "value": 41.185}, {"type": "map_at_10", "value": 53.929}, {"type": "map_at_100", "value": 54.92}, {"type": "map_at_1000", "value": 54.967999999999996}, {"type": "map_at_3", "value": 50.70400000000001}, {"type": "map_at_5", "value": 52.673}, {"type": "mrr_at_1", "value": 47.398}, {"type": "mrr_at_10", "value": 57.303000000000004}, {"type": "mrr_at_100", "value": 57.959}, {"type": "mrr_at_1000", "value": 57.985}, {"type": "mrr_at_3", "value": 54.932}, {"type": "mrr_at_5", "value": 56.464999999999996}, {"type": "ndcg_at_1", "value": 47.398}, {"type": "ndcg_at_10", "value": 59.653}, {"type": "ndcg_at_100", "value": 63.627}, {"type": "ndcg_at_1000", "value": 64.596}, {"type": "ndcg_at_3", "value": 54.455}, {"type": "ndcg_at_5", "value": 57.245000000000005}, {"type": "precision_at_1", "value": 47.398}, {"type": "precision_at_10", "value": 9.524000000000001}, {"type": "precision_at_100", "value": 1.243}, {"type": "precision_at_1000", "value": 0.13699999999999998}, {"type": "precision_at_3", "value": 24.389}, {"type": "precision_at_5", "value": 16.752}, {"type": "recall_at_1", "value": 41.185}, {"type": "recall_at_10", "value": 73.193}, {"type": "recall_at_100", "value": 90.357}, {"type": "recall_at_1000", "value": 97.253}, {"type": "recall_at_3", "value": 59.199999999999996}, {"type": "recall_at_5", "value": 66.118}, {"type": "map_at_1", "value": 27.27}, {"type": "map_at_10", "value": 36.223}, {"type": "map_at_100", "value": 37.218}, {"type": "map_at_1000", "value": 37.293}, {"type": "map_at_3", "value": 33.503}, {"type": "map_at_5", "value": 35.097}, {"type": "mrr_at_1", "value": 29.492}, {"type": "mrr_at_10", "value": 38.352000000000004}, {"type": "mrr_at_100", "value": 39.188}, {"type": "mrr_at_1000", "value": 39.247}, {"type": "mrr_at_3", "value": 35.876000000000005}, {"type": "mrr_at_5", "value": 37.401}, {"type": "ndcg_at_1", "value": 29.492}, {"type": "ndcg_at_10", "value": 41.239}, {"type": "ndcg_at_100", "value": 46.066}, {"type": "ndcg_at_1000", "value": 47.992000000000004}, {"type": "ndcg_at_3", "value": 36.11}, {"type": "ndcg_at_5", "value": 38.772}, {"type": "precision_at_1", "value": 29.492}, {"type": "precision_at_10", "value": 6.260000000000001}, {"type": "precision_at_100", "value": 0.914}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": 
"precision_at_3", "value": 15.104000000000001}, {"type": "precision_at_5", "value": 10.644}, {"type": "recall_at_1", "value": 27.27}, {"type": "recall_at_10", "value": 54.589}, {"type": "recall_at_100", "value": 76.70700000000001}, {"type": "recall_at_1000", "value": 91.158}, {"type": "recall_at_3", "value": 40.974}, {"type": "recall_at_5", "value": 47.327000000000005}, {"type": "map_at_1", "value": 17.848}, {"type": "map_at_10", "value": 26.207}, {"type": "map_at_100", "value": 27.478}, {"type": "map_at_1000", "value": 27.602}, {"type": "map_at_3", "value": 23.405}, {"type": "map_at_5", "value": 24.98}, {"type": "mrr_at_1", "value": 21.891}, {"type": "mrr_at_10", "value": 31.041999999999998}, {"type": "mrr_at_100", "value": 32.092}, {"type": "mrr_at_1000", "value": 32.151999999999994}, {"type": "mrr_at_3", "value": 28.358}, {"type": "mrr_at_5", "value": 29.969}, {"type": "ndcg_at_1", "value": 21.891}, {"type": "ndcg_at_10", "value": 31.585}, {"type": "ndcg_at_100", "value": 37.531}, {"type": "ndcg_at_1000", "value": 40.256}, {"type": "ndcg_at_3", "value": 26.508}, {"type": "ndcg_at_5", "value": 28.894}, {"type": "precision_at_1", "value": 21.891}, {"type": "precision_at_10", "value": 5.795999999999999}, {"type": "precision_at_100", "value": 0.9990000000000001}, {"type": "precision_at_1000", "value": 0.13799999999999998}, {"type": "precision_at_3", "value": 12.769}, {"type": "precision_at_5", "value": 9.279}, {"type": "recall_at_1", "value": 17.848}, {"type": "recall_at_10", "value": 43.452}, {"type": "recall_at_100", "value": 69.216}, {"type": "recall_at_1000", "value": 88.102}, {"type": "recall_at_3", "value": 29.18}, {"type": "recall_at_5", "value": 35.347}, {"type": "map_at_1", "value": 30.94}, {"type": "map_at_10", "value": 41.248000000000005}, {"type": "map_at_100", "value": 42.495}, {"type": "map_at_1000", "value": 42.602000000000004}, {"type": "map_at_3", "value": 37.939}, {"type": "map_at_5", "value": 39.924}, {"type": "mrr_at_1", "value": 37.824999999999996}, {"type": "mrr_at_10", "value": 47.041}, {"type": "mrr_at_100", "value": 47.83}, {"type": "mrr_at_1000", "value": 47.878}, {"type": "mrr_at_3", "value": 44.466}, {"type": "mrr_at_5", "value": 46.111999999999995}, {"type": "ndcg_at_1", "value": 37.824999999999996}, {"type": "ndcg_at_10", "value": 47.223}, {"type": "ndcg_at_100", "value": 52.394}, {"type": "ndcg_at_1000", "value": 54.432}, {"type": "ndcg_at_3", "value": 42.032000000000004}, {"type": "ndcg_at_5", "value": 44.772}, {"type": "precision_at_1", "value": 37.824999999999996}, {"type": "precision_at_10", "value": 8.393}, {"type": "precision_at_100", "value": 1.2890000000000001}, {"type": "precision_at_1000", "value": 0.164}, {"type": "precision_at_3", "value": 19.698}, {"type": "precision_at_5", "value": 14.013}, {"type": "recall_at_1", "value": 30.94}, {"type": "recall_at_10", "value": 59.316}, {"type": "recall_at_100", "value": 80.783}, {"type": "recall_at_1000", "value": 94.15400000000001}, {"type": "recall_at_3", "value": 44.712}, {"type": "recall_at_5", "value": 51.932}, {"type": "map_at_1", "value": 27.104}, {"type": "map_at_10", "value": 36.675999999999995}, {"type": "map_at_100", "value": 38.076}, {"type": "map_at_1000", "value": 38.189}, {"type": "map_at_3", "value": 33.733999999999995}, {"type": "map_at_5", "value": 35.287}, {"type": "mrr_at_1", "value": 33.904}, {"type": "mrr_at_10", "value": 42.55}, {"type": "mrr_at_100", "value": 43.434}, {"type": "mrr_at_1000", "value": 43.494}, {"type": "mrr_at_3", "value": 40.126}, {"type": "mrr_at_5", "value": 41.473}, 
{"type": "ndcg_at_1", "value": 33.904}, {"type": "ndcg_at_10", "value": 42.414}, {"type": "ndcg_at_100", "value": 48.203}, {"type": "ndcg_at_1000", "value": 50.437}, {"type": "ndcg_at_3", "value": 37.633}, {"type": "ndcg_at_5", "value": 39.67}, {"type": "precision_at_1", "value": 33.904}, {"type": "precision_at_10", "value": 7.82}, {"type": "precision_at_100", "value": 1.2409999999999999}, {"type": "precision_at_1000", "value": 0.159}, {"type": "precision_at_3", "value": 17.884}, {"type": "precision_at_5", "value": 12.648000000000001}, {"type": "recall_at_1", "value": 27.104}, {"type": "recall_at_10", "value": 53.563}, {"type": "recall_at_100", "value": 78.557}, {"type": "recall_at_1000", "value": 93.533}, {"type": "recall_at_3", "value": 39.92}, {"type": "recall_at_5", "value": 45.457}, {"type": "map_at_1", "value": 27.707749999999997}, {"type": "map_at_10", "value": 36.961}, {"type": "map_at_100", "value": 38.158833333333334}, {"type": "map_at_1000", "value": 38.270333333333326}, {"type": "map_at_3", "value": 34.07183333333334}, {"type": "map_at_5", "value": 35.69533333333334}, {"type": "mrr_at_1", "value": 32.81875}, {"type": "mrr_at_10", "value": 41.293}, {"type": "mrr_at_100", "value": 42.116499999999995}, {"type": "mrr_at_1000", "value": 42.170249999999996}, {"type": "mrr_at_3", "value": 38.83983333333333}, {"type": "mrr_at_5", "value": 40.29775}, {"type": "ndcg_at_1", "value": 32.81875}, {"type": "ndcg_at_10", "value": 42.355}, {"type": "ndcg_at_100", "value": 47.41374999999999}, {"type": "ndcg_at_1000", "value": 49.5805}, {"type": "ndcg_at_3", "value": 37.52825}, {"type": "ndcg_at_5", "value": 39.83266666666667}, {"type": "precision_at_1", "value": 32.81875}, {"type": "precision_at_10", "value": 7.382416666666666}, {"type": "precision_at_100", "value": 1.1640833333333334}, {"type": "precision_at_1000", "value": 0.15383333333333335}, {"type": "precision_at_3", "value": 17.134166666666665}, {"type": "precision_at_5", "value": 12.174833333333336}, {"type": "recall_at_1", "value": 27.707749999999997}, {"type": "recall_at_10", "value": 53.945}, {"type": "recall_at_100", "value": 76.191}, {"type": "recall_at_1000", "value": 91.101}, {"type": "recall_at_3", "value": 40.39083333333334}, {"type": "recall_at_5", "value": 46.40083333333333}, {"type": "map_at_1", "value": 26.482}, {"type": "map_at_10", "value": 33.201}, {"type": "map_at_100", "value": 34.107}, {"type": "map_at_1000", "value": 34.197}, {"type": "map_at_3", "value": 31.174000000000003}, {"type": "map_at_5", "value": 32.279}, {"type": "mrr_at_1", "value": 29.908}, {"type": "mrr_at_10", "value": 36.235}, {"type": "mrr_at_100", "value": 37.04}, {"type": "mrr_at_1000", "value": 37.105}, {"type": "mrr_at_3", "value": 34.355999999999995}, {"type": "mrr_at_5", "value": 35.382999999999996}, {"type": "ndcg_at_1", "value": 29.908}, {"type": "ndcg_at_10", "value": 37.325}, {"type": "ndcg_at_100", "value": 41.795}, {"type": "ndcg_at_1000", "value": 44.105}, {"type": "ndcg_at_3", "value": 33.555}, {"type": "ndcg_at_5", "value": 35.266999999999996}, {"type": "precision_at_1", "value": 29.908}, {"type": "precision_at_10", "value": 5.721}, {"type": "precision_at_100", "value": 0.8630000000000001}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 14.008000000000001}, {"type": "precision_at_5", "value": 9.754999999999999}, {"type": "recall_at_1", "value": 26.482}, {"type": "recall_at_10", "value": 47.072}, {"type": "recall_at_100", "value": 67.27}, {"type": "recall_at_1000", "value": 84.371}, 
{"type": "recall_at_3", "value": 36.65}, {"type": "recall_at_5", "value": 40.774}, {"type": "map_at_1", "value": 18.815}, {"type": "map_at_10", "value": 26.369999999999997}, {"type": "map_at_100", "value": 27.458}, {"type": "map_at_1000", "value": 27.588}, {"type": "map_at_3", "value": 23.990000000000002}, {"type": "map_at_5", "value": 25.345000000000002}, {"type": "mrr_at_1", "value": 22.953000000000003}, {"type": "mrr_at_10", "value": 30.342999999999996}, {"type": "mrr_at_100", "value": 31.241000000000003}, {"type": "mrr_at_1000", "value": 31.319000000000003}, {"type": "mrr_at_3", "value": 28.16}, {"type": "mrr_at_5", "value": 29.406}, {"type": "ndcg_at_1", "value": 22.953000000000003}, {"type": "ndcg_at_10", "value": 31.151}, {"type": "ndcg_at_100", "value": 36.309000000000005}, {"type": "ndcg_at_1000", "value": 39.227000000000004}, {"type": "ndcg_at_3", "value": 26.921}, {"type": "ndcg_at_5", "value": 28.938000000000002}, {"type": "precision_at_1", "value": 22.953000000000003}, {"type": "precision_at_10", "value": 5.602}, {"type": "precision_at_100", "value": 0.9530000000000001}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_3", "value": 12.606}, {"type": "precision_at_5", "value": 9.119}, {"type": "recall_at_1", "value": 18.815}, {"type": "recall_at_10", "value": 41.574}, {"type": "recall_at_100", "value": 64.84400000000001}, {"type": "recall_at_1000", "value": 85.406}, {"type": "recall_at_3", "value": 29.694}, {"type": "recall_at_5", "value": 34.935}, {"type": "map_at_1", "value": 27.840999999999998}, {"type": "map_at_10", "value": 36.797999999999995}, {"type": "map_at_100", "value": 37.993}, {"type": "map_at_1000", "value": 38.086999999999996}, {"type": "map_at_3", "value": 34.050999999999995}, {"type": "map_at_5", "value": 35.379}, {"type": "mrr_at_1", "value": 32.649}, {"type": "mrr_at_10", "value": 41.025}, {"type": "mrr_at_100", "value": 41.878}, {"type": "mrr_at_1000", "value": 41.929}, {"type": "mrr_at_3", "value": 38.573}, {"type": "mrr_at_5", "value": 39.715}, {"type": "ndcg_at_1", "value": 32.649}, {"type": "ndcg_at_10", "value": 42.142}, {"type": "ndcg_at_100", "value": 47.558}, {"type": "ndcg_at_1000", "value": 49.643}, {"type": "ndcg_at_3", "value": 37.12}, {"type": "ndcg_at_5", "value": 38.983000000000004}, {"type": "precision_at_1", "value": 32.649}, {"type": "precision_at_10", "value": 7.08}, {"type": "precision_at_100", "value": 1.1039999999999999}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_3", "value": 16.698}, {"type": "precision_at_5", "value": 11.511000000000001}, {"type": "recall_at_1", "value": 27.840999999999998}, {"type": "recall_at_10", "value": 54.245}, {"type": "recall_at_100", "value": 77.947}, {"type": "recall_at_1000", "value": 92.36999999999999}, {"type": "recall_at_3", "value": 40.146}, {"type": "recall_at_5", "value": 44.951}, {"type": "map_at_1", "value": 26.529000000000003}, {"type": "map_at_10", "value": 35.010000000000005}, {"type": "map_at_100", "value": 36.647}, {"type": "map_at_1000", "value": 36.857}, {"type": "map_at_3", "value": 31.968000000000004}, {"type": "map_at_5", "value": 33.554}, {"type": "mrr_at_1", "value": 31.818}, {"type": "mrr_at_10", "value": 39.550999999999995}, {"type": "mrr_at_100", "value": 40.54}, {"type": "mrr_at_1000", "value": 40.596}, {"type": "mrr_at_3", "value": 36.726}, {"type": "mrr_at_5", "value": 38.416}, {"type": "ndcg_at_1", "value": 31.818}, {"type": "ndcg_at_10", "value": 40.675}, {"type": "ndcg_at_100", "value": 46.548}, 
{"type": "ndcg_at_1000", "value": 49.126}, {"type": "ndcg_at_3", "value": 35.829}, {"type": "ndcg_at_5", "value": 38.0}, {"type": "precision_at_1", "value": 31.818}, {"type": "precision_at_10", "value": 7.826}, {"type": "precision_at_100", "value": 1.538}, {"type": "precision_at_1000", "value": 0.24}, {"type": "precision_at_3", "value": 16.601}, {"type": "precision_at_5", "value": 12.095}, {"type": "recall_at_1", "value": 26.529000000000003}, {"type": "recall_at_10", "value": 51.03}, {"type": "recall_at_100", "value": 77.556}, {"type": "recall_at_1000", "value": 93.804}, {"type": "recall_at_3", "value": 36.986000000000004}, {"type": "recall_at_5", "value": 43.096000000000004}, {"type": "map_at_1", "value": 23.480999999999998}, {"type": "map_at_10", "value": 30.817}, {"type": "map_at_100", "value": 31.838}, {"type": "map_at_1000", "value": 31.932}, {"type": "map_at_3", "value": 28.011999999999997}, {"type": "map_at_5", "value": 29.668}, {"type": "mrr_at_1", "value": 25.323}, {"type": "mrr_at_10", "value": 33.072}, {"type": "mrr_at_100", "value": 33.926}, {"type": "mrr_at_1000", "value": 33.993}, {"type": "mrr_at_3", "value": 30.436999999999998}, {"type": "mrr_at_5", "value": 32.092}, {"type": "ndcg_at_1", "value": 25.323}, {"type": "ndcg_at_10", "value": 35.514}, {"type": "ndcg_at_100", "value": 40.489000000000004}, {"type": "ndcg_at_1000", "value": 42.908}, {"type": "ndcg_at_3", "value": 30.092000000000002}, {"type": "ndcg_at_5", "value": 32.989000000000004}, {"type": "precision_at_1", "value": 25.323}, {"type": "precision_at_10", "value": 5.545}, {"type": "precision_at_100", "value": 0.861}, {"type": "precision_at_1000", "value": 0.117}, {"type": "precision_at_3", "value": 12.446}, {"type": "precision_at_5", "value": 9.131}, {"type": "recall_at_1", "value": 23.480999999999998}, {"type": "recall_at_10", "value": 47.825}, {"type": "recall_at_100", "value": 70.652}, {"type": "recall_at_1000", "value": 88.612}, {"type": "recall_at_3", "value": 33.537}, {"type": "recall_at_5", "value": 40.542}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 13.333999999999998}, {"type": "map_at_10", "value": 22.524}, {"type": "map_at_100", "value": 24.506}, {"type": "map_at_1000", "value": 24.715}, {"type": "map_at_3", "value": 19.022}, {"type": "map_at_5", "value": 20.693}, {"type": "mrr_at_1", "value": 29.186}, {"type": "mrr_at_10", "value": 41.22}, {"type": "mrr_at_100", "value": 42.16}, {"type": "mrr_at_1000", "value": 42.192}, {"type": "mrr_at_3", "value": 38.013000000000005}, {"type": "mrr_at_5", "value": 39.704}, {"type": "ndcg_at_1", "value": 29.186}, {"type": "ndcg_at_10", "value": 31.167}, {"type": "ndcg_at_100", "value": 38.879000000000005}, {"type": "ndcg_at_1000", "value": 42.376000000000005}, {"type": "ndcg_at_3", "value": 25.817}, {"type": "ndcg_at_5", "value": 27.377000000000002}, {"type": "precision_at_1", "value": 29.186}, {"type": "precision_at_10", "value": 9.693999999999999}, {"type": "precision_at_100", "value": 1.8030000000000002}, {"type": "precision_at_1000", "value": 0.246}, {"type": "precision_at_3", "value": 19.11}, {"type": "precision_at_5", "value": 14.344999999999999}, {"type": "recall_at_1", "value": 13.333999999999998}, {"type": "recall_at_10", "value": 37.092000000000006}, {"type": "recall_at_100", "value": 63.651}, {"type": "recall_at_1000", "value": 83.05}, {"type": "recall_at_3", "value": 23.74}, {"type": "recall_at_5", 
"value": 28.655}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 9.151}, {"type": "map_at_10", "value": 19.653000000000002}, {"type": "map_at_100", "value": 28.053}, {"type": "map_at_1000", "value": 29.709000000000003}, {"type": "map_at_3", "value": 14.191}, {"type": "map_at_5", "value": 16.456}, {"type": "mrr_at_1", "value": 66.25}, {"type": "mrr_at_10", "value": 74.4}, {"type": "mrr_at_100", "value": 74.715}, {"type": "mrr_at_1000", "value": 74.726}, {"type": "mrr_at_3", "value": 72.417}, {"type": "mrr_at_5", "value": 73.667}, {"type": "ndcg_at_1", "value": 54.25}, {"type": "ndcg_at_10", "value": 40.77}, {"type": "ndcg_at_100", "value": 46.359}, {"type": "ndcg_at_1000", "value": 54.193000000000005}, {"type": "ndcg_at_3", "value": 44.832}, {"type": "ndcg_at_5", "value": 42.63}, {"type": "precision_at_1", "value": 66.25}, {"type": "precision_at_10", "value": 32.175}, {"type": "precision_at_100", "value": 10.668}, {"type": "precision_at_1000", "value": 2.067}, {"type": "precision_at_3", "value": 47.667}, {"type": "precision_at_5", "value": 41.3}, {"type": "recall_at_1", "value": 9.151}, {"type": "recall_at_10", "value": 25.003999999999998}, {"type": "recall_at_100", "value": 52.976}, {"type": "recall_at_1000", "value": 78.315}, {"type": "recall_at_3", "value": 15.487}, {"type": "recall_at_5", "value": 18.999}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 51.89999999999999}, {"type": "f1", "value": 46.47777925067403}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 73.706}, {"type": "map_at_10", "value": 82.423}, {"type": "map_at_100", "value": 82.67999999999999}, {"type": "map_at_1000", "value": 82.694}, {"type": "map_at_3", "value": 81.328}, {"type": "map_at_5", "value": 82.001}, {"type": "mrr_at_1", "value": 79.613}, {"type": "mrr_at_10", "value": 87.07000000000001}, {"type": "mrr_at_100", "value": 87.169}, {"type": "mrr_at_1000", "value": 87.17}, {"type": "mrr_at_3", "value": 86.404}, {"type": "mrr_at_5", "value": 86.856}, {"type": "ndcg_at_1", "value": 79.613}, {"type": "ndcg_at_10", "value": 86.289}, {"type": "ndcg_at_100", "value": 87.201}, {"type": "ndcg_at_1000", "value": 87.428}, {"type": "ndcg_at_3", "value": 84.625}, {"type": "ndcg_at_5", "value": 85.53699999999999}, {"type": "precision_at_1", "value": 79.613}, {"type": "precision_at_10", "value": 10.399}, {"type": "precision_at_100", "value": 1.1079999999999999}, {"type": "precision_at_1000", "value": 0.11499999999999999}, {"type": "precision_at_3", "value": 32.473}, {"type": "precision_at_5", "value": 20.132}, {"type": "recall_at_1", "value": 73.706}, {"type": "recall_at_10", "value": 93.559}, {"type": "recall_at_100", "value": 97.188}, {"type": "recall_at_1000", "value": 98.555}, {"type": "recall_at_3", "value": 88.98700000000001}, {"type": "recall_at_5", "value": 91.373}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 19.841}, {"type": "map_at_10", "value": 32.643}, {"type": "map_at_100", "value": 
34.575}, {"type": "map_at_1000", "value": 34.736}, {"type": "map_at_3", "value": 28.317999999999998}, {"type": "map_at_5", "value": 30.964000000000002}, {"type": "mrr_at_1", "value": 39.660000000000004}, {"type": "mrr_at_10", "value": 48.620000000000005}, {"type": "mrr_at_100", "value": 49.384}, {"type": "mrr_at_1000", "value": 49.415}, {"type": "mrr_at_3", "value": 45.988}, {"type": "mrr_at_5", "value": 47.361}, {"type": "ndcg_at_1", "value": 39.660000000000004}, {"type": "ndcg_at_10", "value": 40.646}, {"type": "ndcg_at_100", "value": 47.657}, {"type": "ndcg_at_1000", "value": 50.428}, {"type": "ndcg_at_3", "value": 36.689}, {"type": "ndcg_at_5", "value": 38.211}, {"type": "precision_at_1", "value": 39.660000000000004}, {"type": "precision_at_10", "value": 11.235000000000001}, {"type": "precision_at_100", "value": 1.8530000000000002}, {"type": "precision_at_1000", "value": 0.23600000000000002}, {"type": "precision_at_3", "value": 24.587999999999997}, {"type": "precision_at_5", "value": 18.395}, {"type": "recall_at_1", "value": 19.841}, {"type": "recall_at_10", "value": 48.135}, {"type": "recall_at_100", "value": 74.224}, {"type": "recall_at_1000", "value": 90.826}, {"type": "recall_at_3", "value": 33.536}, {"type": "recall_at_5", "value": 40.311}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 40.358}, {"type": "map_at_10", "value": 64.497}, {"type": "map_at_100", "value": 65.362}, {"type": "map_at_1000", "value": 65.41900000000001}, {"type": "map_at_3", "value": 61.06700000000001}, {"type": "map_at_5", "value": 63.317}, {"type": "mrr_at_1", "value": 80.716}, {"type": "mrr_at_10", "value": 86.10799999999999}, {"type": "mrr_at_100", "value": 86.265}, {"type": "mrr_at_1000", "value": 86.27}, {"type": "mrr_at_3", "value": 85.271}, {"type": "mrr_at_5", "value": 85.82499999999999}, {"type": "ndcg_at_1", "value": 80.716}, {"type": "ndcg_at_10", "value": 72.597}, {"type": "ndcg_at_100", "value": 75.549}, {"type": "ndcg_at_1000", "value": 76.61}, {"type": "ndcg_at_3", "value": 67.874}, {"type": "ndcg_at_5", "value": 70.655}, {"type": "precision_at_1", "value": 80.716}, {"type": "precision_at_10", "value": 15.148}, {"type": "precision_at_100", "value": 1.745}, {"type": "precision_at_1000", "value": 0.188}, {"type": "precision_at_3", "value": 43.597}, {"type": "precision_at_5", "value": 28.351}, {"type": "recall_at_1", "value": 40.358}, {"type": "recall_at_10", "value": 75.739}, {"type": "recall_at_100", "value": 87.259}, {"type": "recall_at_1000", "value": 94.234}, {"type": "recall_at_3", "value": 65.39500000000001}, {"type": "recall_at_5", "value": 70.878}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 90.80799999999998}, {"type": "ap", "value": 86.81350378180757}, {"type": "f1", "value": 90.79901248314215}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 22.096}, {"type": "map_at_10", "value": 34.384}, {"type": "map_at_100", "value": 35.541}, {"type": "map_at_1000", "value": 35.589999999999996}, {"type": "map_at_3", "value": 30.496000000000002}, {"type": "map_at_5", "value": 32.718}, {"type": "mrr_at_1", "value": 
22.750999999999998}, {"type": "mrr_at_10", "value": 35.024}, {"type": "mrr_at_100", "value": 36.125}, {"type": "mrr_at_1000", "value": 36.168}, {"type": "mrr_at_3", "value": 31.225}, {"type": "mrr_at_5", "value": 33.416000000000004}, {"type": "ndcg_at_1", "value": 22.750999999999998}, {"type": "ndcg_at_10", "value": 41.351}, {"type": "ndcg_at_100", "value": 46.92}, {"type": "ndcg_at_1000", "value": 48.111}, {"type": "ndcg_at_3", "value": 33.439}, {"type": "ndcg_at_5", "value": 37.407000000000004}, {"type": "precision_at_1", "value": 22.750999999999998}, {"type": "precision_at_10", "value": 6.564}, {"type": "precision_at_100", "value": 0.935}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 14.288}, {"type": "precision_at_5", "value": 10.581999999999999}, {"type": "recall_at_1", "value": 22.096}, {"type": "recall_at_10", "value": 62.771}, {"type": "recall_at_100", "value": 88.529}, {"type": "recall_at_1000", "value": 97.55}, {"type": "recall_at_3", "value": 41.245}, {"type": "recall_at_5", "value": 50.788}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 94.16780665754673}, {"type": "f1", "value": 93.96331194859894}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 76.90606475148198}, {"type": "f1", "value": 58.58344986604187}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 76.14660390047075}, {"type": "f1", "value": 74.31533923533614}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 80.16139878950908}, {"type": "f1", "value": 80.18532656824924}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 32.949880906135085}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 31.56300351524862}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 31.196521894371315}, {"type": "mrr", "value": 32.22644231694389}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.783}, {"type": "map_at_10", "value": 14.549000000000001}, {"type": "map_at_100", "value": 18.433}, {"type": 
"map_at_1000", "value": 19.949}, {"type": "map_at_3", "value": 10.936}, {"type": "map_at_5", "value": 12.514}, {"type": "mrr_at_1", "value": 47.368}, {"type": "mrr_at_10", "value": 56.42}, {"type": "mrr_at_100", "value": 56.908}, {"type": "mrr_at_1000", "value": 56.95}, {"type": "mrr_at_3", "value": 54.283}, {"type": "mrr_at_5", "value": 55.568}, {"type": "ndcg_at_1", "value": 45.666000000000004}, {"type": "ndcg_at_10", "value": 37.389}, {"type": "ndcg_at_100", "value": 34.253}, {"type": "ndcg_at_1000", "value": 43.059999999999995}, {"type": "ndcg_at_3", "value": 42.725}, {"type": "ndcg_at_5", "value": 40.193}, {"type": "precision_at_1", "value": 47.368}, {"type": "precision_at_10", "value": 27.988000000000003}, {"type": "precision_at_100", "value": 8.672}, {"type": "precision_at_1000", "value": 2.164}, {"type": "precision_at_3", "value": 40.248}, {"type": "precision_at_5", "value": 34.737}, {"type": "recall_at_1", "value": 6.783}, {"type": "recall_at_10", "value": 17.838}, {"type": "recall_at_100", "value": 33.672000000000004}, {"type": "recall_at_1000", "value": 66.166}, {"type": "recall_at_3", "value": 11.849}, {"type": "recall_at_5", "value": 14.205000000000002}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.698999999999998}, {"type": "map_at_10", "value": 46.556}, {"type": "map_at_100", "value": 47.652}, {"type": "map_at_1000", "value": 47.68}, {"type": "map_at_3", "value": 42.492000000000004}, {"type": "map_at_5", "value": 44.763999999999996}, {"type": "mrr_at_1", "value": 35.747}, {"type": "mrr_at_10", "value": 49.242999999999995}, {"type": "mrr_at_100", "value": 50.052}, {"type": "mrr_at_1000", "value": 50.068}, {"type": "mrr_at_3", "value": 45.867000000000004}, {"type": "mrr_at_5", "value": 47.778999999999996}, {"type": "ndcg_at_1", "value": 35.717999999999996}, {"type": "ndcg_at_10", "value": 54.14600000000001}, {"type": "ndcg_at_100", "value": 58.672999999999995}, {"type": "ndcg_at_1000", "value": 59.279}, {"type": "ndcg_at_3", "value": 46.407}, {"type": "ndcg_at_5", "value": 50.181}, {"type": "precision_at_1", "value": 35.717999999999996}, {"type": "precision_at_10", "value": 8.844000000000001}, {"type": "precision_at_100", "value": 1.139}, {"type": "precision_at_1000", "value": 0.12}, {"type": "precision_at_3", "value": 20.993000000000002}, {"type": "precision_at_5", "value": 14.791000000000002}, {"type": "recall_at_1", "value": 31.698999999999998}, {"type": "recall_at_10", "value": 74.693}, {"type": "recall_at_100", "value": 94.15299999999999}, {"type": "recall_at_1000", "value": 98.585}, {"type": "recall_at_3", "value": 54.388999999999996}, {"type": "recall_at_5", "value": 63.08200000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 71.283}, {"type": "map_at_10", "value": 85.24000000000001}, {"type": "map_at_100", "value": 85.882}, {"type": "map_at_1000", "value": 85.897}, {"type": "map_at_3", "value": 82.326}, {"type": "map_at_5", "value": 84.177}, {"type": "mrr_at_1", "value": 82.21000000000001}, {"type": "mrr_at_10", "value": 88.228}, {"type": "mrr_at_100", "value": 88.32}, {"type": "mrr_at_1000", "value": 88.32}, {"type": "mrr_at_3", "value": 87.323}, {"type": "mrr_at_5", "value": 87.94800000000001}, {"type": "ndcg_at_1", "value": 82.17999999999999}, {"type": "ndcg_at_10", "value": 
88.9}, {"type": "ndcg_at_100", "value": 90.079}, {"type": "ndcg_at_1000", "value": 90.158}, {"type": "ndcg_at_3", "value": 86.18299999999999}, {"type": "ndcg_at_5", "value": 87.71799999999999}, {"type": "precision_at_1", "value": 82.17999999999999}, {"type": "precision_at_10", "value": 13.464}, {"type": "precision_at_100", "value": 1.533}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.693}, {"type": "precision_at_5", "value": 24.792}, {"type": "recall_at_1", "value": 71.283}, {"type": "recall_at_10", "value": 95.742}, {"type": "recall_at_100", "value": 99.67200000000001}, {"type": "recall_at_1000", "value": 99.981}, {"type": "recall_at_3", "value": 87.888}, {"type": "recall_at_5", "value": 92.24}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 56.24267063669042}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 62.88056988932578}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.903}, {"type": "map_at_10", "value": 13.202}, {"type": "map_at_100", "value": 15.5}, {"type": "map_at_1000", "value": 15.870999999999999}, {"type": "map_at_3", "value": 9.407}, {"type": "map_at_5", "value": 11.238}, {"type": "mrr_at_1", "value": 24.2}, {"type": "mrr_at_10", "value": 35.867}, {"type": "mrr_at_100", "value": 37.001}, {"type": "mrr_at_1000", "value": 37.043}, {"type": "mrr_at_3", "value": 32.5}, {"type": "mrr_at_5", "value": 34.35}, {"type": "ndcg_at_1", "value": 24.2}, {"type": "ndcg_at_10", "value": 21.731}, {"type": "ndcg_at_100", "value": 30.7}, {"type": "ndcg_at_1000", "value": 36.618}, {"type": "ndcg_at_3", "value": 20.72}, {"type": "ndcg_at_5", "value": 17.954}, {"type": "precision_at_1", "value": 24.2}, {"type": "precision_at_10", "value": 11.33}, {"type": "precision_at_100", "value": 2.4410000000000003}, {"type": "precision_at_1000", "value": 0.386}, {"type": "precision_at_3", "value": 19.667}, {"type": "precision_at_5", "value": 15.86}, {"type": "recall_at_1", "value": 4.903}, {"type": "recall_at_10", "value": 22.962}, {"type": "recall_at_100", "value": 49.563}, {"type": "recall_at_1000", "value": 78.238}, {"type": "recall_at_3", "value": 11.953}, {"type": "recall_at_5", "value": 16.067999999999998}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.12694254604078}, {"type": "cos_sim_spearman", "value": 80.30141815181918}, {"type": "euclidean_pearson", "value": 81.34015449877128}, {"type": "euclidean_spearman", "value": 80.13984197010849}, {"type": "manhattan_pearson", "value": 81.31767068124086}, {"type": "manhattan_spearman", "value": 80.11720513114103}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.13112984010417}, {"type": 
"cos_sim_spearman", "value": 78.03063573402875}, {"type": "euclidean_pearson", "value": 83.51928418844804}, {"type": "euclidean_spearman", "value": 78.4045235411144}, {"type": "manhattan_pearson", "value": 83.49981637388689}, {"type": "manhattan_spearman", "value": 78.4042575139372}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.50327987379504}, {"type": "cos_sim_spearman", "value": 84.18556767756205}, {"type": "euclidean_pearson", "value": 82.69684424327679}, {"type": "euclidean_spearman", "value": 83.5368106038335}, {"type": "manhattan_pearson", "value": 82.57967581007374}, {"type": "manhattan_spearman", "value": 83.43009053133697}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.50756863007814}, {"type": "cos_sim_spearman", "value": 82.27204331279108}, {"type": "euclidean_pearson", "value": 81.39535251429741}, {"type": "euclidean_spearman", "value": 81.84386626336239}, {"type": "manhattan_pearson", "value": 81.34281737280695}, {"type": "manhattan_spearman", "value": 81.81149375673166}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.8727714856726}, {"type": "cos_sim_spearman", "value": 87.95738287792312}, {"type": "euclidean_pearson", "value": 86.62920602795887}, {"type": "euclidean_spearman", "value": 87.05207355381243}, {"type": "manhattan_pearson", "value": 86.53587918472225}, {"type": "manhattan_spearman", "value": 86.95382961029586}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.52240359769479}, {"type": "cos_sim_spearman", "value": 85.47685776238286}, {"type": "euclidean_pearson", "value": 84.25815333483058}, {"type": "euclidean_spearman", "value": 85.27415639683198}, {"type": "manhattan_pearson", "value": 84.29127757025637}, {"type": "manhattan_spearman", "value": 85.30226224917351}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.42501708915708}, {"type": "cos_sim_spearman", "value": 86.42276182795041}, {"type": "euclidean_pearson", "value": 86.5408207354761}, {"type": "euclidean_spearman", "value": 85.46096321750838}, {"type": "manhattan_pearson", "value": 86.54177303026881}, {"type": "manhattan_spearman", "value": 85.50313151916117}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 64.86521089250766}, {"type": "cos_sim_spearman", "value": 65.94868540323003}, {"type": "euclidean_pearson", "value": 67.16569626533084}, {"type": "euclidean_spearman", "value": 66.37667004134917}, {"type": "manhattan_pearson", "value": 67.1482365102333}, {"type": 
"manhattan_spearman", "value": 66.53240122580029}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.64746265365318}, {"type": "cos_sim_spearman", "value": 86.41888825906786}, {"type": "euclidean_pearson", "value": 85.27453642725811}, {"type": "euclidean_spearman", "value": 85.94095796602544}, {"type": "manhattan_pearson", "value": 85.28643660505334}, {"type": "manhattan_spearman", "value": 85.95028003260744}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 87.48903153618527}, {"type": "mrr", "value": 96.41081503826601}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 58.594}, {"type": "map_at_10", "value": 69.296}, {"type": "map_at_100", "value": 69.782}, {"type": "map_at_1000", "value": 69.795}, {"type": "map_at_3", "value": 66.23}, {"type": "map_at_5", "value": 68.293}, {"type": "mrr_at_1", "value": 61.667}, {"type": "mrr_at_10", "value": 70.339}, {"type": "mrr_at_100", "value": 70.708}, {"type": "mrr_at_1000", "value": 70.722}, {"type": "mrr_at_3", "value": 68.0}, {"type": "mrr_at_5", "value": 69.56700000000001}, {"type": "ndcg_at_1", "value": 61.667}, {"type": "ndcg_at_10", "value": 74.039}, {"type": "ndcg_at_100", "value": 76.103}, {"type": "ndcg_at_1000", "value": 76.47800000000001}, {"type": "ndcg_at_3", "value": 68.967}, {"type": "ndcg_at_5", "value": 71.96900000000001}, {"type": "precision_at_1", "value": 61.667}, {"type": "precision_at_10", "value": 9.866999999999999}, {"type": "precision_at_100", "value": 1.097}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 27.111}, {"type": "precision_at_5", "value": 18.2}, {"type": "recall_at_1", "value": 58.594}, {"type": "recall_at_10", "value": 87.422}, {"type": "recall_at_100", "value": 96.667}, {"type": "recall_at_1000", "value": 99.667}, {"type": "recall_at_3", "value": 74.217}, {"type": "recall_at_5", "value": 81.539}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.85049504950496}, {"type": "cos_sim_ap", "value": 96.33111544137081}, {"type": "cos_sim_f1", "value": 92.35443037974684}, {"type": "cos_sim_precision", "value": 93.53846153846153}, {"type": "cos_sim_recall", "value": 91.2}, {"type": "dot_accuracy", "value": 99.82376237623762}, {"type": "dot_ap", "value": 95.38082527310888}, {"type": "dot_f1", "value": 90.90909090909092}, {"type": "dot_precision", "value": 92.90187891440502}, {"type": "dot_recall", "value": 89.0}, {"type": "euclidean_accuracy", "value": 99.84851485148515}, {"type": "euclidean_ap", "value": 96.32316003996347}, {"type": "euclidean_f1", "value": 92.2071392659628}, {"type": "euclidean_precision", "value": 92.71991911021233}, {"type": "euclidean_recall", "value": 91.7}, {"type": "manhattan_accuracy", "value": 99.84851485148515}, {"type": "manhattan_ap", "value": 96.3655668249217}, {"type": 
"manhattan_f1", "value": 92.18356026222895}, {"type": "manhattan_precision", "value": 92.98067141403867}, {"type": "manhattan_recall", "value": 91.4}, {"type": "max_accuracy", "value": 99.85049504950496}, {"type": "max_ap", "value": 96.3655668249217}, {"type": "max_f1", "value": 92.35443037974684}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 65.94861371629051}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 35.009430451385}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 54.61164066427969}, {"type": "mrr", "value": 55.49710603938544}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.622620124907662}, {"type": "cos_sim_spearman", "value": 31.0678351356163}, {"type": "dot_pearson", "value": 30.863727693306814}, {"type": "dot_spearman", "value": 31.230306567021255}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.22}, {"type": "map_at_10", "value": 2.011}, {"type": "map_at_100", "value": 10.974}, {"type": "map_at_1000", "value": 25.819}, {"type": "map_at_3", "value": 0.6649999999999999}, {"type": "map_at_5", "value": 1.076}, {"type": "mrr_at_1", "value": 86.0}, {"type": "mrr_at_10", "value": 91.8}, {"type": "mrr_at_100", "value": 91.8}, {"type": "mrr_at_1000", "value": 91.8}, {"type": "mrr_at_3", "value": 91.0}, {"type": "mrr_at_5", "value": 91.8}, {"type": "ndcg_at_1", "value": 82.0}, {"type": "ndcg_at_10", "value": 78.07300000000001}, {"type": "ndcg_at_100", "value": 58.231}, {"type": "ndcg_at_1000", "value": 51.153000000000006}, {"type": "ndcg_at_3", "value": 81.123}, {"type": "ndcg_at_5", "value": 81.059}, {"type": "precision_at_1", "value": 86.0}, {"type": "precision_at_10", "value": 83.0}, {"type": "precision_at_100", "value": 59.38}, {"type": "precision_at_1000", "value": 22.55}, {"type": "precision_at_3", "value": 87.333}, {"type": "precision_at_5", "value": 86.8}, {"type": "recall_at_1", "value": 0.22}, {"type": "recall_at_10", "value": 2.2079999999999997}, {"type": "recall_at_100", "value": 14.069}, {"type": "recall_at_1000", "value": 47.678}, {"type": "recall_at_3", "value": 0.7040000000000001}, {"type": "recall_at_5", "value": 1.161}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.809}, {"type": "map_at_10", "value": 10.394}, {"type": "map_at_100", "value": 16.598}, {"type": "map_at_1000", "value": 18.142}, {"type": "map_at_3", "value": 5.572}, {"type": "map_at_5", "value": 7.1370000000000005}, {"type": "mrr_at_1", "value": 
32.653}, {"type": "mrr_at_10", "value": 46.564}, {"type": "mrr_at_100", "value": 47.469}, {"type": "mrr_at_1000", "value": 47.469}, {"type": "mrr_at_3", "value": 42.177}, {"type": "mrr_at_5", "value": 44.524}, {"type": "ndcg_at_1", "value": 30.612000000000002}, {"type": "ndcg_at_10", "value": 25.701}, {"type": "ndcg_at_100", "value": 37.532}, {"type": "ndcg_at_1000", "value": 48.757}, {"type": "ndcg_at_3", "value": 28.199999999999996}, {"type": "ndcg_at_5", "value": 25.987}, {"type": "precision_at_1", "value": 32.653}, {"type": "precision_at_10", "value": 23.469}, {"type": "precision_at_100", "value": 7.9799999999999995}, {"type": "precision_at_1000", "value": 1.5350000000000001}, {"type": "precision_at_3", "value": 29.932}, {"type": "precision_at_5", "value": 26.122}, {"type": "recall_at_1", "value": 2.809}, {"type": "recall_at_10", "value": 16.887}, {"type": "recall_at_100", "value": 48.67}, {"type": "recall_at_1000", "value": 82.89699999999999}, {"type": "recall_at_3", "value": 6.521000000000001}, {"type": "recall_at_5", "value": 9.609}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 71.57860000000001}, {"type": "ap", "value": 13.82629211536393}, {"type": "f1", "value": 54.59860966183956}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 59.38030560271647}, {"type": "f1", "value": 59.69685552567865}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 51.4736717043405}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.92853311080646}, {"type": "cos_sim_ap", "value": 77.67872502591382}, {"type": "cos_sim_f1", "value": 70.33941236068895}, {"type": "cos_sim_precision", "value": 67.63273258645884}, {"type": "cos_sim_recall", "value": 73.27176781002639}, {"type": "dot_accuracy", "value": 85.79603027954938}, {"type": "dot_ap", "value": 73.73786190233379}, {"type": "dot_f1", "value": 67.3437901774235}, {"type": "dot_precision", "value": 65.67201604814443}, {"type": "dot_recall", "value": 69.10290237467018}, {"type": "euclidean_accuracy", "value": 86.94045419324074}, {"type": "euclidean_ap", "value": 77.6687791535167}, {"type": "euclidean_f1", "value": 70.47209214023542}, {"type": "euclidean_precision", "value": 67.7207492094381}, {"type": "euclidean_recall", "value": 73.45646437994723}, {"type": "manhattan_accuracy", "value": 86.87488823985218}, {"type": "manhattan_ap", "value": 77.63373392430728}, {"type": "manhattan_f1", "value": 70.40920716112532}, {"type": "manhattan_precision", "value": 68.31265508684864}, {"type": "manhattan_recall", "value": 72.63852242744063}, {"type": "max_accuracy", "value": 86.94045419324074}, {"type": "max_ap", "value": 77.67872502591382}, {"type": 
"max_f1", "value": 70.47209214023542}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.67155664221679}, {"type": "cos_sim_ap", "value": 85.64591703003417}, {"type": "cos_sim_f1", "value": 77.59531005352656}, {"type": "cos_sim_precision", "value": 73.60967184801382}, {"type": "cos_sim_recall", "value": 82.03726516784724}, {"type": "dot_accuracy", "value": 88.41541506578181}, {"type": "dot_ap", "value": 84.6482788957769}, {"type": "dot_f1", "value": 77.04748541466657}, {"type": "dot_precision", "value": 74.02440754931176}, {"type": "dot_recall", "value": 80.3279950723745}, {"type": "euclidean_accuracy", "value": 88.63080684596576}, {"type": "euclidean_ap", "value": 85.44570045321562}, {"type": "euclidean_f1", "value": 77.28769403336106}, {"type": "euclidean_precision", "value": 72.90600040958427}, {"type": "euclidean_recall", "value": 82.22975053895904}, {"type": "manhattan_accuracy", "value": 88.59393798269105}, {"type": "manhattan_ap", "value": 85.40271361038187}, {"type": "manhattan_f1", "value": 77.17606419344392}, {"type": "manhattan_precision", "value": 72.4447747078295}, {"type": "manhattan_recall", "value": 82.5685247921158}, {"type": "max_accuracy", "value": 88.67155664221679}, {"type": "max_ap", "value": 85.64591703003417}, {"type": "max_f1", "value": 77.59531005352656}]}]}]} | hsikchi/dump | null | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2401.03462",
"arxiv:2312.15503",
"arxiv:2311.13534",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:40:15+00:00 | [
"2401.03462",
"2312.15503",
"2311.13534",
"2310.07554",
"2309.07597"
] | [
"en"
] | TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #mteb #en #arxiv-2401.03462 #arxiv-2312.15503 #arxiv-2311.13534 #arxiv-2310.07554 #arxiv-2309.07597 #license-mit #model-index #endpoints_compatible #region-us
| FlagEmbedding
=============
####
[Model List](#model-list) |
[FAQ](#frequently-asked-questions) |
[Usage](#usage) |
[Evaluation](#evaluation) |
[Train](#train) |
[Contact](#contact) |
[Citation](#citation) |
[License](#license)
For more details, please refer to our GitHub: FlagEmbedding.
If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using bge-m3.
English | Chinese
FlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently:
* Long-Context LLM: Activation Beacon
* Fine-tuning of LM: LM-Cocktail
* Dense Retrieval: BGE-M3, LLM Embedder, BGE Embedding
* Reranker Model: BGE Reranker
* Benchmark: C-MTEB
News
----
* 1/30/2024: Release BGE-M3, a new member of the BGE model series! M3 stands for Multi-linguality (100+ languages), Multi-granularities (input length up to 8192), Multi-Functionality (unification of dense, lexical, multi-vec/colbert retrieval).
It is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.
Technical Report and Code. :fire:
* 1/9/2024: Release Activation-Beacon, an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. Technical Report :fire:
* 12/24/2023: Release LLaRA, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. Technical Report :fire:
* 11/23/2023: Release LM-Cocktail, a method to maintain general capabilities during fine-tuning by merging multiple language models. Technical Report :fire:
* 10/12/2023: Release LLM-Embedder, a unified embedding model to support diverse retrieval augmentation needs for LLMs. Technical Report
* 09/15/2023: The technical report and massive training data of BGE have been released
* 09/12/2023: New models:
+ New reranker models: release the cross-encoder models 'BAAI/bge-reranker-base' and 'BAAI/bge-reranker-large', which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
+ Updated embedding models: release the 'bge-\*-v1.5' embedding models to alleviate the issue of the similarity distribution and enhance their retrieval ability without instruction.
More
* 09/07/2023: Update fine-tune code: Add script to mine hard negatives and support adding instruction during fine-tuning.
* 08/09/2023: BGE models are integrated into Langchain; you can use them like this. The C-MTEB leaderboard is available.
* 08/05/2023: Release base-scale and small-scale models with the best performance among models of the same size
* 08/02/2023: Release 'bge-large-\*' (short for BAAI General Embedding) models, ranked 1st on the MTEB and C-MTEB benchmarks! :tada: :tada:
* 08/01/2023: We release the Chinese Massive Text Embedding Benchmark (C-MTEB), consisting of 31 test datasets.
Model List
----------
'bge' is short for 'BAAI general embedding'.
[1]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed; just use the original query directly. In all cases, no instruction needs to be added to passages.
[2]: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by other, simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.
All models have been uploaded to the Hugging Face Hub, and you can see them at URL
If you cannot open the Hugging Face Hub, you can also download the models at URL.
Frequently asked questions
--------------------------
1. How to fine-tune bge embedding model?
Follow this example to prepare data and fine-tune your model.
Some suggestions:


* Mine hard negatives following this example, which can improve retrieval performance.
* If you pre-train bge on your data, the pre-trained model cannot be used directly to calculate similarity; it must be fine-tuned with contrastive learning before computing similarity.
* If the accuracy of the fine-tuned model is still not high enough, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
2. The similarity score between two dissimilar sentences is higher than 0.5
We suggest using bge v1.5, which alleviates the issue of the similarity distribution.


Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity distribution of the current BGE model lies roughly in the interval [0.6, 1].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
what matters is the relative order of the scores, not the absolute value.
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
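For intuition, the behavior described above comes from an InfoNCE-style contrastive objective (a sketch; the exact formulation in the BGE training code may differ):

$$
\mathcal{L} = -\log \frac{\exp\big(s(q, p^{+})/\tau\big)}{\sum_{i} \exp\big(s(q, p_{i})/\tau\big)}, \qquad \tau = 0.01
$$

where $s(q, p)$ is the cosine similarity between query $q$ and passage $p$, and $p^{+}$ is the positive passage. With a temperature as small as 0.01, tiny similarity gaps are sharply amplified inside the loss, so the model only needs to keep the scores correctly ordered rather than spread out; this is why the resulting similarities concentrate in a narrow, high interval such as [0.6, 1].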
3. When does the query instruction need to be used
For 'bge-\*-v1.5', we improved its retrieval ability when no instruction is used.
Omitting the instruction causes only a slight degradation in retrieval performance compared with using one.
So, for convenience, you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.
In all cases, no instruction needs to be added to the documents/passages.
Usage
-----
### Usage for Embedding Model
Here are some examples for using 'bge' models with
FlagEmbedding, Sentence-Transformers, Langchain, or Huggingface Transformers.
#### Using FlagEmbedding
If this doesn't work for you, see the FlagEmbedding repository for other ways to install FlagEmbedding.
For the value of the argument 'query\_instruction\_for\_retrieval', see Model List.
By default, FlagModel will use all available GPUs when encoding. Please set 'os.environ["CUDA\_VISIBLE\_DEVICES"]' to select specific GPUs.
You can also set 'os.environ["CUDA\_VISIBLE\_DEVICES"]=""' to make all GPUs unavailable.
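A minimal sketch of this setup (the checkpoint id and instruction string below are illustrative; pick the ones for your model from the Model List):

```python
from FlagEmbedding import FlagModel

sentences_1 = ["Sample sentence 1", "Sample sentence 2"]
sentences_2 = ["Sample sentence 3", "Sample sentence 4"]

# use_fp16=True speeds up encoding with a slight performance cost
model = FlagModel(
    "BAAI/bge-large-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages:",
    use_fp16=True,
)

embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T  # inner product of normalized embeddings

# s2p retrieval: encode_queries() prepends the instruction to each query automatically,
# while passages are encoded with plain encode()
queries = ["query_1", "query_2"]
passages = ["passage_1", "passage_2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```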
#### Using Sentence-Transformers
You can also use the 'bge' models with sentence-transformers:
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the Model List for instructions).
The instruction is not needed for passages.
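For example (a sketch; the checkpoint id is illustrative):

```python
from sentence_transformers import SentenceTransformer

sentences_1 = ["Sample sentence 1", "Sample sentence 2"]
sentences_2 = ["Sample sentence 3", "Sample sentence 4"]

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T

# s2p retrieval: manually prepend the instruction to each short query (not to passages)
instruction = "Represent this sentence for searching relevant passages:"
queries = ["query_1", "query_2"]
passages = ["passage_1", "passage_2"]
q_embeddings = model.encode([instruction + q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```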
#### Using Langchain
You can use 'bge' in langchain like this:
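A sketch (in newer langchain versions the class is imported from 'langchain_community.embeddings' instead; the checkpoint id is illustrative):

```python
from langchain.embeddings import HuggingFaceBgeEmbeddings

model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {"device": "cuda"}
encode_kwargs = {"normalize_embeddings": True}  # cosine similarity via dot product

model = HuggingFaceBgeEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    query_instruction="Represent this sentence for searching relevant passages:",
)
embedding = model.embed_query("hi, this is an example query")
```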
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
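A minimal sketch of this CLS-pooling recipe (checkpoint id illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

sentences = ["Sample sentence 1", "Sample sentence 2"]

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
model = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5")
model.eval()

# For s2p retrieval, prepend the instruction to the queries (not to the passages)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded_input)
    # CLS pooling: take the last hidden state of the first token
    sentence_embeddings = model_output[0][:, 0]
# normalize so that the inner product equals cosine similarity
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
```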
#### Usage of the ONNX files
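A sketch using Hugging Face Optimum (assumptions: the ONNX weights live at 'onnx/model.onnx' inside the model repo, and the checkpoint id is illustrative):

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
model = ORTModelForFeatureExtraction.from_pretrained(
    "BAAI/bge-large-en-v1.5",
    file_name="onnx/model.onnx",  # assumed location of the ONNX export
)

inputs = tokenizer(["Sample sentence"], padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state[:, 0]  # CLS pooling, as above
```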
#### Usage via infinity
It's also possible to deploy the ONNX files with the infinity\_emb pip package.
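A sketch following the AsyncEmbeddingEngine interface (the infinity\_emb API has evolved across versions, so treat the exact names as assumptions):

```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs

sentences = ["Embed this sentence via Infinity.", "bge is a high-quality embedding model."]
engine = AsyncEmbeddingEngine.from_args(
    EngineArgs(model_name_or_path="BAAI/bge-large-en-v1.5", device="cpu", engine="optimum")
)

async def main():
    async with engine:  # starts and stops the engine around the call
        embeddings, usage = await engine.embed(sentences=sentences)
    return embeddings

asyncio.run(main())
```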
### Usage for Reranker
Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
Get relevance scores (higher scores indicate more relevance):
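For example (checkpoint id illustrative):

```python
from FlagEmbedding import FlagReranker

# use_fp16=True speeds up computation with a slight performance cost
reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)

score = reranker.compute_score(["query", "passage"])

# batched scoring of query-passage pairs
scores = reranker.compute_score([
    ["what is panda?", "hi"],
    ["what is panda?", "The giant panda is a bear species endemic to China."],
])
```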
#### Using Huggingface transformers
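A sketch with plain transformers; the reranker loads as a sequence-classification model whose single logit is the relevance score (checkpoint id illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-reranker-large")
model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-large")
model.eval()

pairs = [
    ["what is panda?", "hi"],
    ["what is panda?", "The giant panda is a bear species endemic to China."],
]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors="pt", max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
```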
Evaluation
----------
'baai-general-embedding' models achieve state-of-the-art performance on both the MTEB and C-MTEB leaderboards!
For more details and evaluation tools see our scripts.
* MTEB:
* C-MTEB:
We created the benchmark C-MTEB for Chinese text embedding, which consists of 31 datasets across 6 tasks.
Please refer to C\_MTEB for a detailed introduction.
* Reranking:
See C\_MTEB for evaluation script.
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
Train
-----
### BAAI Embedding
We pre-train the models using retromae and train them on large-scale paired data using contrastive learning.
You can fine-tune the embedding model on your data following our examples.
We also provide a pre-train example.
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned first.
For more training details for bge, see baai\_general\_embedding.
### BGE Reranker
The cross-encoder performs full attention over the input pair,
which is more accurate but more time-consuming than the embedding model (i.e., bi-encoder).
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data;
the data format is the same as for the embedding model, so you can fine-tune it easily following our example.
For more details, please refer to ./FlagEmbedding/reranker/URL
Contact
-------
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao(stxiao@URL) and Zheng Liu(liuzheng@URL).


If you find this repository useful, please consider giving it a star :star: and a citation.
License
-------
FlagEmbedding is licensed under the MIT License. The released models can be used for commercial purposes free of charge.
| [
"#### \n\n[Model List](#model-list) | \n [FAQ](#frequently-asked-questions) |\n [Usage](#usage) |\n [Evaluation](#evaluation) |\n [Train](#train) |\n [Contact](#contact) |\n [Citation](#citation) |\n [License](#license)\n\n\nFor more details please refer to our Github: FlagEmbedding.\n\n\nIf you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using bge-m3.\n\n\nEnglish | 中文\n\n\nFlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently:\n\n\n* Long-Context LLM: Activation Beacon\n* Fine-tuning of LM : LM-Cocktail\n* Dense Retrieval: BGE-M3, LLM Embedder, BGE Embedding\n* Reranker Model: BGE Reranker\n* Benchmark: C-MTEB\n\n\nNews\n----\n\n\n* 1/30/2024: Release BGE-M3, a new member to BGE model series! M3 stands for Multi-linguality (100+ languages), Multi-granularities (input length up to 8192), Multi-Functionality (unification of dense, lexical, multi-vec/colbert retrieval).\nIt is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.\nTechnical Report and Code. :fire:\n* 1/9/2024: Release Activation-Beacon, an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. Technical Report :fire:\n* 12/24/2023: Release LLaRA, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. Technical Report :fire:\n* 11/23/2023: Release LM-Cocktail, a method to maintain general capabilities during fine-tuning by merging multiple language models. Technical Report :fire:\n* 10/12/2023: Release LLM-Embedder, a unified embedding model to support diverse retrieval augmentation needs for LLMs. Technical Report\n* 09/15/2023: The technical report and massive training data of BGE has been released\n* 09/12/2023: New models:\n\t+ New reranker model: release cross-encoder models 'BAAI/bge-reranker-base' and 'BAAI/bge-reranker-large', which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models.\n\t+ update embedding model: release 'bge-\\*-v1.5' embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction.\n\n\n\nMore\n* 09/07/2023: Update fine-tune code: Add script to mine hard negatives and support adding instruction during fine-tuning.\n* 08/09/2023: BGE Models are integrated into Langchain, you can use it like this; C-MTEB leaderboard is available.\n* 08/05/2023: Release base-scale and small-scale models, best performance among the models of the same size\n* 08/02/2023: Release 'bge-large-\\*'(short for BAAI General Embedding) Models, rank 1st on MTEB and C-MTEB benchmark! :tada: :tada:\n* 08/01/2023: We release the Chinese Massive Text Embedding Benchmark (C-MTEB), consisting of 31 test dataset.\n\n\n\nModel List\n----------\n\n\n'bge' is short for 'BAAI general embedding'.\n\n\n\n[1]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, no instruction needs to be added to passages.\n\n\n[2]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. 
To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models.\nFor examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results.\n\n\nAll models have been uploaded to Huggingface Hub, and you can see them at URL\nIf you cannot open the Huggingface Hub, you also can download the models at URL .\n\n\nFrequently asked questions\n--------------------------\n\n\n\n1. How to fine-tune bge embedding model?\nFollowing this example to prepare data and fine-tune your model.\nSome suggestions:\n\n\n* Mine hard negatives following this example, which can improve the retrieval performance.\n* If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.\n* If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker.\n\n\n\n\n2. The similarity score between two dissimilar sentences is higher than 0.5\nSuggest to use bge v1.5, which alleviates the issue of the similarity distribution.\n\n\nSince we finetune the models by contrastive learning with a temperature of 0.01,\nthe similarity distribution of the current BGE model is about in the interval [0.6, 1].\nSo a similarity score greater than 0.5 does not indicate that the two sentences are similar.\n\n\nFor downstream tasks, such as passage retrieval or semantic similarity,\nwhat matters is the relative order of the scores, not the absolute value.\nIf you need to filter similar sentences based on a similarity threshold,\nplease select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).\n\n\n\n\n3. When does the query instruction need to be used\nFor the 'bge-\\*-v1.5', we improve its retrieval ability when not using instruction.\nNo instruction only has a slight degradation in retrieval performance compared with using instruction.\nSo you can generate embedding without instruction in all cases for convenience.\n\n\nFor a retrieval task that uses short queries to find long related documents,\nit is recommended to add instructions for these short queries.\nThe best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.\nIn all cases, the documents/passages do not need to add the instruction.\n\n\n\nUsage\n-----",
"### Usage for Embedding Model\n\n\nHere are some examples for using 'bge' models with\nFlagEmbedding, Sentence-Transformers, Langchain, or Huggingface Transformers.",
"#### Using FlagEmbedding\n\n\nIf it doesn't work for you, you can see FlagEmbedding for more methods to install FlagEmbedding.\n\n\nFor the value of the argument 'query\\_instruction\\_for\\_retrieval', see Model List.\n\n\nBy default, FlagModel will use all available GPUs when encoding. Please set 'os.environ[\"CUDA\\_VISIBLE\\_DEVICES\"]' to select specific GPUs.\nYou also can set 'os.environ[\"CUDA\\_VISIBLE\\_DEVICES\"]=\"\"' to make all GPUs unavailable.",
"#### Using Sentence-Transformers\n\n\nYou can also use the 'bge' models with sentence-transformers:\n\n\nFor s2p(short query to long passage) retrieval task,\neach short query should start with an instruction (instructions see Model List).\nBut the instruction is not needed for passages.",
"#### Using Langchain\n\n\nYou can use 'bge' in langchain like this:",
"#### Using HuggingFace Transformers\n\n\nWith the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.",
"#### Usage of the ONNX files",
"#### Usage via infinity\n\n\nIts also possible to deploy the onnx files with the infinity\\_emb pip package.",
"### Usage for Reranker\n\n\nDifferent from embedding model, reranker uses question and document as input and directly output similarity instead of embedding.\nYou can get a relevance score by inputting query and passage to the reranker.\nThe reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range.",
"#### Using FlagEmbedding\n\n\nGet relevance scores (higher scores indicate more relevance):",
"#### Using Huggingface transformers\n\n\nEvaluation\n----------\n\n\n'baai-general-embedding' models achieve state-of-the-art performance on both MTEB and C-MTEB leaderboard!\nFor more details and evaluation tools see our scripts.\n\n\n* MTEB:\n\n\n\n* C-MTEB: \n\nWe create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks.\nPlease refer to C\\_MTEB for a detailed introduction.\n\n\n\n* Reranking:\nSee C\\_MTEB for evaluation script.\n\n\n\n\\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks\n\n\nTrain\n-----",
"### BAAI Embedding\n\n\nWe pre-train the models using retromae and train them on large-scale pairs data using contrastive learning.\nYou can fine-tune the embedding model on your data following our examples.\nWe also provide a pre-train example.\nNote that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly, it needs to be fine-tuned.\nMore training details for bge see baai\\_general\\_embedding.",
"### BGE Reranker\n\n\nCross-encoder will perform full-attention over the input pair,\nwhich is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model.\nTherefore, it can be used to re-rank the top-k documents returned by embedding model.\nWe train the cross-encoder on a multilingual pair data,\nThe data format is the same as embedding model, so you can fine-tune it easily following our example.\nMore details please refer to ./FlagEmbedding/reranker/URL\n\n\nContact\n-------\n\n\nIf you have any question or suggestion related to this project, feel free to open an issue or pull request.\nYou also can email Shitao Xiao(stxiao@URL) and Zheng Liu(liuzheng@URL).\n\n\nIf you find this repository useful, please consider giving a star :star: and citation\n\n\nLicense\n-------\n\n\nFlagEmbedding is licensed under the MIT License. The released models can be used for commercial purposes free of charge."
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #mteb #en #arxiv-2401.03462 #arxiv-2312.15503 #arxiv-2311.13534 #arxiv-2310.07554 #arxiv-2309.07597 #license-mit #model-index #endpoints_compatible #region-us \n",
"#### \n\n[Model List](#model-list) | \n [FAQ](#frequently-asked-questions) |\n [Usage](#usage) |\n [Evaluation](#evaluation) |\n [Train](#train) |\n [Contact](#contact) |\n [Citation](#citation) |\n [License](#license)\n\n\nFor more details please refer to our Github: FlagEmbedding.\n\n\nIf you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using bge-m3.\n\n\nEnglish | 中文\n\n\nFlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently:\n\n\n* Long-Context LLM: Activation Beacon\n* Fine-tuning of LM : LM-Cocktail\n* Dense Retrieval: BGE-M3, LLM Embedder, BGE Embedding\n* Reranker Model: BGE Reranker\n* Benchmark: C-MTEB\n\n\nNews\n----\n\n\n* 1/30/2024: Release BGE-M3, a new member to BGE model series! M3 stands for Multi-linguality (100+ languages), Multi-granularities (input length up to 8192), Multi-Functionality (unification of dense, lexical, multi-vec/colbert retrieval).\nIt is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.\nTechnical Report and Code. :fire:\n* 1/9/2024: Release Activation-Beacon, an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. Technical Report :fire:\n* 12/24/2023: Release LLaRA, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. Technical Report :fire:\n* 11/23/2023: Release LM-Cocktail, a method to maintain general capabilities during fine-tuning by merging multiple language models. Technical Report :fire:\n* 10/12/2023: Release LLM-Embedder, a unified embedding model to support diverse retrieval augmentation needs for LLMs. Technical Report\n* 09/15/2023: The technical report and massive training data of BGE has been released\n* 09/12/2023: New models:\n\t+ New reranker model: release cross-encoder models 'BAAI/bge-reranker-base' and 'BAAI/bge-reranker-large', which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models.\n\t+ update embedding model: release 'bge-\\*-v1.5' embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction.\n\n\n\nMore\n* 09/07/2023: Update fine-tune code: Add script to mine hard negatives and support adding instruction during fine-tuning.\n* 08/09/2023: BGE Models are integrated into Langchain, you can use it like this; C-MTEB leaderboard is available.\n* 08/05/2023: Release base-scale and small-scale models, best performance among the models of the same size\n* 08/02/2023: Release 'bge-large-\\*'(short for BAAI General Embedding) Models, rank 1st on MTEB and C-MTEB benchmark! :tada: :tada:\n* 08/01/2023: We release the Chinese Massive Text Embedding Benchmark (C-MTEB), consisting of 31 test dataset.\n\n\n\nModel List\n----------\n\n\n'bge' is short for 'BAAI general embedding'.\n\n\n\n[1]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, no instruction needs to be added to passages.\n\n\n[2]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. 
To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models.\nFor examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results.\n\n\nAll models have been uploaded to Huggingface Hub, and you can see them at URL\nIf you cannot open the Huggingface Hub, you also can download the models at URL .\n\n\nFrequently asked questions\n--------------------------\n\n\n\n1. How to fine-tune bge embedding model?\nFollowing this example to prepare data and fine-tune your model.\nSome suggestions:\n\n\n* Mine hard negatives following this example, which can improve the retrieval performance.\n* If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.\n* If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker.\n\n\n\n\n2. The similarity score between two dissimilar sentences is higher than 0.5\nSuggest to use bge v1.5, which alleviates the issue of the similarity distribution.\n\n\nSince we finetune the models by contrastive learning with a temperature of 0.01,\nthe similarity distribution of the current BGE model is about in the interval [0.6, 1].\nSo a similarity score greater than 0.5 does not indicate that the two sentences are similar.\n\n\nFor downstream tasks, such as passage retrieval or semantic similarity,\nwhat matters is the relative order of the scores, not the absolute value.\nIf you need to filter similar sentences based on a similarity threshold,\nplease select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).\n\n\n\n\n3. When does the query instruction need to be used\nFor the 'bge-\\*-v1.5', we improve its retrieval ability when not using instruction.\nNo instruction only has a slight degradation in retrieval performance compared with using instruction.\nSo you can generate embedding without instruction in all cases for convenience.\n\n\nFor a retrieval task that uses short queries to find long related documents,\nit is recommended to add instructions for these short queries.\nThe best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.\nIn all cases, the documents/passages do not need to add the instruction.\n\n\n\nUsage\n-----",
"### Usage for Embedding Model\n\n\nHere are some examples for using 'bge' models with\nFlagEmbedding, Sentence-Transformers, Langchain, or Huggingface Transformers.",
"#### Using FlagEmbedding\n\n\nIf it doesn't work for you, you can see FlagEmbedding for more methods to install FlagEmbedding.\n\n\nFor the value of the argument 'query\\_instruction\\_for\\_retrieval', see Model List.\n\n\nBy default, FlagModel will use all available GPUs when encoding. Please set 'os.environ[\"CUDA\\_VISIBLE\\_DEVICES\"]' to select specific GPUs.\nYou also can set 'os.environ[\"CUDA\\_VISIBLE\\_DEVICES\"]=\"\"' to make all GPUs unavailable.",
"#### Using Sentence-Transformers\n\n\nYou can also use the 'bge' models with sentence-transformers:\n\n\nFor s2p(short query to long passage) retrieval task,\neach short query should start with an instruction (instructions see Model List).\nBut the instruction is not needed for passages.",
"#### Using Langchain\n\n\nYou can use 'bge' in langchain like this:",
"#### Using HuggingFace Transformers\n\n\nWith the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.",
"#### Usage of the ONNX files",
"#### Usage via infinity\n\n\nIts also possible to deploy the onnx files with the infinity\\_emb pip package.",
"### Usage for Reranker\n\n\nDifferent from embedding model, reranker uses question and document as input and directly output similarity instead of embedding.\nYou can get a relevance score by inputting query and passage to the reranker.\nThe reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range.",
"#### Using FlagEmbedding\n\n\nGet relevance scores (higher scores indicate more relevance):",
"#### Using Huggingface transformers\n\n\nEvaluation\n----------\n\n\n'baai-general-embedding' models achieve state-of-the-art performance on both MTEB and C-MTEB leaderboard!\nFor more details and evaluation tools see our scripts.\n\n\n* MTEB:\n\n\n\n* C-MTEB: \n\nWe create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks.\nPlease refer to C\\_MTEB for a detailed introduction.\n\n\n\n* Reranking:\nSee C\\_MTEB for evaluation script.\n\n\n\n\\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks\n\n\nTrain\n-----",
"### BAAI Embedding\n\n\nWe pre-train the models using retromae and train them on large-scale pairs data using contrastive learning.\nYou can fine-tune the embedding model on your data following our examples.\nWe also provide a pre-train example.\nNote that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly, it needs to be fine-tuned.\nMore training details for bge see baai\\_general\\_embedding.",
"### BGE Reranker\n\n\nCross-encoder will perform full-attention over the input pair,\nwhich is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model.\nTherefore, it can be used to re-rank the top-k documents returned by embedding model.\nWe train the cross-encoder on a multilingual pair data,\nThe data format is the same as embedding model, so you can fine-tune it easily following our example.\nMore details please refer to ./FlagEmbedding/reranker/URL\n\n\nContact\n-------\n\n\nIf you have any question or suggestion related to this project, feel free to open an issue or pull request.\nYou also can email Shitao Xiao(stxiao@URL) and Zheng Liu(liuzheng@URL).\n\n\nIf you find this repository useful, please consider giving a star :star: and citation\n\n\nLicense\n-------\n\n\nFlagEmbedding is licensed under the MIT License. The released models can be used for commercial purposes free of charge."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.005-len_4-filtered-negative-v2
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
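The exact training script is not published, so the following is a hedged sketch of how the hyperparameters above would map onto `transformers.TrainingArguments`; the `output_dir` value and the use of the `Trainer` API are assumptions, not part of the original card.

```python
# Hypothetical reconstruction of the training configuration listed above.
# Only the numeric values come from the card; everything else is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ruBert-base-sberquad-0.005-len_4-filtered-negative-v2",  # assumed
    learning_rate=5e-4,              # learning_rate: 0.0005
    per_device_train_batch_size=32,  # train_batch_size: 32
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,                         # seed: 42
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    max_steps=7000,                  # training_steps: 7000
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the
    # transformers default optimizer, so no extra arguments are needed.
)
```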
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.005-len_4-filtered-negative-v2", "results": []}]} | Shalazary/ruBert-base-sberquad-0.005-len_4-filtered-negative-v2 | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T04:40:48+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.005-len_4-filtered-negative-v2
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# ruBert-base-sberquad-0.005-len_4-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.005-len_4-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
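The card itself leaves this section empty. As a stopgap, here is a minimal hedged sketch for loading the checkpoint with the standard auto classes; the repo id is taken from this card's metadata, and stock Mamba support in `transformers` (v4.39+) is assumed rather than confirmed by the authors.

```python
# Minimal loading sketch. Assumptions: repo id from the card metadata and
# that the checkpoint works with the built-in Mamba integration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "voidful/mamba-790m-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```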
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | voidful/mamba-790m-base | null | [
"transformers",
"safetensors",
"mamba",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:40:55+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mamba #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mamba #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification | transformers |
## Model Specification
- Model: XLM-RoBERTa (base-sized model)
- Training Data:
- Combined Afrikaans & Hebrew corpora (Top 2 Languages)
- Training Details:
- Base configurations with a minor adjustment in learning rate (4.5e-5)
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on the Universal Dependencies Tagalog Ugnayan test set (78.43% accuracy)
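The card does not include a usage snippet; the sketch below is one plausible way to run the tagger through the `transformers` pipeline API. The repo id comes from this card's metadata, and the example sentence is an arbitrary Tagalog phrase, not taken from the evaluation set.

```python
# Hedged usage sketch: token-classification pipeline over the fine-tuned
# XLM-RoBERTa POS tagger. Output labels should be the UPOS tags listed below.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="iceman2434/xlm-roberta-base-ft-udpos213-top2lang",
    aggregation_strategy="simple",  # merge subword pieces into whole words
)

for token in tagger("Nagluto si Maria ng adobo."):
    print(token["word"], token["entity_group"])
```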
## POS Tags
- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB | {"language": ["tl"], "datasets": ["universal_dependencies"], "metrics": ["f1"], "pipeline_tag": "token-classification"} | iceman2434/xlm-roberta-base-ft-udpos213-top2lang | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"tl",
"dataset:universal_dependencies",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:41:16+00:00 | [] | [
"tl"
] | TAGS
#transformers #pytorch #xlm-roberta #token-classification #tl #dataset-universal_dependencies #autotrain_compatible #endpoints_compatible #region-us
|
## Model Specification
- Model: XLM-RoBERTa (base-sized model)
- Training Data:
- Combined Afrikaans & Hebrew corpora (Top 2 Languages)
- Training Details:
- Base configurations with a minor adjustment in learning rate (4.5e-5)
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 78.43\% Accuracy)
## POS Tags
- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB | [
"## Model Specification\n- Model: XLM-RoBERTa (base-sized model)\n- Training Data:\n - Combined Afrikaans & Hebrew corpora (Top 2 Languages)\n- Training Details:\n - Base configurations with a minor adjustment in learning rate (4.5e-5)",
"## Evaluation\n- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)\n- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 78.43\\% Accuracy)",
"## POS Tags\n- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB"
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #token-classification #tl #dataset-universal_dependencies #autotrain_compatible #endpoints_compatible #region-us \n",
"## Model Specification\n- Model: XLM-RoBERTa (base-sized model)\n- Training Data:\n - Combined Afrikaans & Hebrew corpora (Top 2 Languages)\n- Training Details:\n - Base configurations with a minor adjustment in learning rate (4.5e-5)",
"## Evaluation\n- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)\n- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 78.43\\% Accuracy)",
"## POS Tags\n- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB"
] |
text-generation | transformers |
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary works
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leading models 10x its size.
For more details on WizardLM-2, please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 8x22B
* **Developed by**: WizardLM@Microsoft AI
* **Model type**: Mixture of Experts (MoE)
* **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
* **Parameters**: 141B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic, GPT-4-based MT-Bench evaluation framework proposed by lmsys to assess model performance.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set of real-world instructions covering the main categories of human requests, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rate without ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable to Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details on this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
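To make the format above concrete, here is a hedged single-turn sketch. The repo id is taken from this card, and actually running a 141B MoE this way requires multi-GPU hardware with `accelerate` installed, so treat this as an illustration of the prompt format rather than a turnkey recipe.

```python
# Sketch of the Vicuna-style prompt format described above.
# Assumptions: repo id from this card; sufficient GPU memory; accelerate
# installed for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prince-canuma/WizardLM-2-8x22B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
prompt = f"{system} USER: Who are you? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```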
| {"license": "apache-2.0"} | prince-canuma/WizardLM-2-8x22B | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:42:21+00:00 | [
"2304.12244",
"2306.08568",
"2308.09583"
] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<p style="font-size:20px;" align="center">
<a href="URL target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
<a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="URL target="_blank">Discord</a>
</p>
## News [2024/04/15]
We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning and agent.
New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works
and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
For more details of WizardLM-2 please read our release blog post and upcoming paper.
## Model Details
* Model name: WizardLM-2 8x22B
* Developed by: WizardLM@Microsoft AI
* Model type: Mixture of Experts (MoE)
* Base model: mistral-community/Mixtral-8x22B-v0.1
* Parameters: 141B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (Upcoming)
* License: Apache2.0
## Model Capacities
MT-Bench
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Human Preferences Evaluation
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.
<p align="center" width="100%">
<a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo code on our github.
| [
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
sentence-similarity | sentence-transformers |
# indobert-finetuned-indonli
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It's based on the [IndoBERT](https://huggingface.co/indobenchmark/indobert-base-p2) model by [indobenchmark](https://github.com/IndoNLP/indonlu) and trained using the IndoNLI dataset on [Google Colab](https://colab.research.google.com/drive/1Qs1Eo5x5WsSCV6hU3MWbnl8hU3pZI4E5?usp=sharing).
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('indobert-finetuned-indonli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('indobert-finetuned-indonli')
model = AutoModel.from_pretrained('indobert-finetuned-indonli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=indobert-finetuned-indonli)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 646 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 64,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 65,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
This model was authored by:
https://huggingface.co/rahmanfadhil/indobert-finetuned-indonli | {"language": ["id"], "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["indonli"], "pipeline_tag": "sentence-similarity"} | cassador/indobert-embeddings | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"id",
"dataset:indonli",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:45:13+00:00 | [] | [
"id"
] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #id #dataset-indonli #endpoints_compatible #region-us
|
# indobert-finetuned-indonli
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It's based on the IndoBERT model by indobenchmark and trained using the IndoNLI dataset on Google Colab.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 646 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
This Model authored by:
URL | [
"# indobert-finetuned-indonli\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nIt's based on the IndoBERT model by indobenchmark and trained using the IndoNLI dataset on Google Colab.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 646 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\nThis Model authored by:\nURL"
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #id #dataset-indonli #endpoints_compatible #region-us \n",
"# indobert-finetuned-indonli\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nIt's based on the IndoBERT model by indobenchmark and trained using the IndoNLI dataset on Google Colab.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 646 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\nThis Model authored by:\nURL"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5173
- F1 Score: 0.8202
- Accuracy: 0.8203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
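For inference, the published adapter would be attached to its base model with `peft`; the sketch below is assumption-heavy, since the card does not document the classification head, label count, or tokenizer behaviour of the seqsight backbone.

```python
# Hedged sketch: load base model + PEFT adapter for promoter classification.
# Assumptions: binary sequence classification head, standard auto classes,
# and trust_remote_code for the custom seqsight architecture.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
print(model(**inputs).logits.softmax(-1))
```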
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.567 | 8.33 | 200 | 0.4913 | 0.7695 | 0.7704 |
| 0.4659 | 16.67 | 400 | 0.4646 | 0.7836 | 0.7840 |
| 0.4218 | 25.0 | 600 | 0.4396 | 0.8042 | 0.8042 |
| 0.3799 | 33.33 | 800 | 0.4136 | 0.8174 | 0.8174 |
| 0.347 | 41.67 | 1000 | 0.4267 | 0.8162 | 0.8164 |
| 0.3231 | 50.0 | 1200 | 0.4182 | 0.8212 | 0.8215 |
| 0.3042 | 58.33 | 1400 | 0.4172 | 0.8273 | 0.8274 |
| 0.2864        | 66.67  | 1600  | 0.4396          | 0.8248   | 0.8250   |
| 0.2721 | 75.0 | 1800 | 0.4565 | 0.8143 | 0.8150 |
| 0.2584 | 83.33 | 2000 | 0.4550 | 0.8268 | 0.8270 |
| 0.2479 | 91.67 | 2200 | 0.4504 | 0.8280 | 0.8282 |
| 0.2387 | 100.0 | 2400 | 0.4396 | 0.8243 | 0.8247 |
| 0.2309 | 108.33 | 2600 | 0.4779 | 0.8264 | 0.8265 |
| 0.2218 | 116.67 | 2800 | 0.4826 | 0.8252 | 0.8257 |
| 0.2152 | 125.0 | 3000 | 0.5164 | 0.8155 | 0.8167 |
| 0.2086 | 133.33 | 3200 | 0.4924 | 0.8238 | 0.8242 |
| 0.2049 | 141.67 | 3400 | 0.5048 | 0.8249 | 0.8253 |
| 0.1991 | 150.0 | 3600 | 0.4793 | 0.8313 | 0.8314 |
| 0.1948 | 158.33 | 3800 | 0.5138 | 0.8270 | 0.8274 |
| 0.1882 | 166.67 | 4000 | 0.5478 | 0.8250 | 0.8257 |
| 0.1854 | 175.0 | 4200 | 0.5306 | 0.8192 | 0.8199 |
| 0.1811 | 183.33 | 4400 | 0.5312 | 0.8253 | 0.8255 |
| 0.1769 | 191.67 | 4600 | 0.5496 | 0.8211 | 0.8218 |
| 0.173 | 200.0 | 4800 | 0.5349 | 0.8228 | 0.8233 |
| 0.1708 | 208.33 | 5000 | 0.5519 | 0.8205 | 0.8213 |
| 0.1678 | 216.67 | 5200 | 0.5220 | 0.8272 | 0.8274 |
| 0.1642 | 225.0 | 5400 | 0.5353 | 0.8241 | 0.8243 |
| 0.1628 | 233.33 | 5600 | 0.5526 | 0.8235 | 0.8238 |
| 0.1599 | 241.67 | 5800 | 0.5991 | 0.8162 | 0.8171 |
| 0.1566 | 250.0 | 6000 | 0.5759 | 0.8235 | 0.8240 |
| 0.1549 | 258.33 | 6200 | 0.5601 | 0.8256 | 0.8258 |
| 0.152 | 266.67 | 6400 | 0.5925 | 0.8196 | 0.8204 |
| 0.1508 | 275.0 | 6600 | 0.5701 | 0.8273 | 0.8275 |
| 0.1483 | 283.33 | 6800 | 0.5979 | 0.8244 | 0.8248 |
| 0.1474 | 291.67 | 7000 | 0.5907 | 0.8235 | 0.8240 |
| 0.1452 | 300.0 | 7200 | 0.5868 | 0.8221 | 0.8225 |
| 0.1434 | 308.33 | 7400 | 0.5805 | 0.8224 | 0.8230 |
| 0.1424 | 316.67 | 7600 | 0.6191 | 0.8215 | 0.8221 |
| 0.1411 | 325.0 | 7800 | 0.5829 | 0.8251 | 0.8253 |
| 0.1392 | 333.33 | 8000 | 0.5949 | 0.8215 | 0.8218 |
| 0.1378 | 341.67 | 8200 | 0.6118 | 0.8228 | 0.8231 |
| 0.1391 | 350.0 | 8400 | 0.6015 | 0.8248 | 0.8252 |
| 0.1383 | 358.33 | 8600 | 0.5969 | 0.8274 | 0.8277 |
| 0.1357 | 366.67 | 8800 | 0.6152 | 0.8221 | 0.8225 |
| 0.1333 | 375.0 | 9000 | 0.6041 | 0.8242 | 0.8245 |
| 0.1336 | 383.33 | 9200 | 0.6000 | 0.8237 | 0.8240 |
| 0.1331 | 391.67 | 9400 | 0.6145 | 0.8233 | 0.8236 |
| 0.1335 | 400.0 | 9600 | 0.6035 | 0.8240 | 0.8243 |
| 0.1334 | 408.33 | 9800 | 0.6038 | 0.8237 | 0.8240 |
| 0.1324 | 416.67 | 10000 | 0.6102 | 0.8241 | 0.8245 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T04:48:37+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_8192\_512\_30M-L32\_all
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5173
* F1 Score: 0.8202
* Accuracy: 0.8203
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-to-audio | transformers |
# music_generation_model
music_generation_model is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [TheBloke/openchat_3.5-GPTQ](https://huggingface.co/TheBloke/openchat_3.5-GPTQ)
* [asigalov61/Allegro-Music-Transformer](https://huggingface.co/asigalov61/Allegro-Music-Transformer)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: TheBloke/openchat_3.5-GPTQ
layer_range: [0, 32]
- model: asigalov61/Allegro-Music-Transformer
layer_range: [0, 32]
merge_method: slerp
base_model: TheBloke/openchat_3.5-GPTQ
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nagayama0706/music_generation_model"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "TheBloke/openchat_3.5-GPTQ", "asigalov61/Allegro-Music-Transformer"], "base_model": ["TheBloke/openchat_3.5-GPTQ", "asigalov61/Allegro-Music-Transformer"], "pipeline_tag": "text-to-audio"} | nagayama0706/music_generation_model | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"TheBloke/openchat_3.5-GPTQ",
"asigalov61/Allegro-Music-Transformer",
"text-to-audio",
"base_model:TheBloke/openchat_3.5-GPTQ",
"base_model:asigalov61/Allegro-Music-Transformer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:49:22+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #TheBloke/openchat_3.5-GPTQ #asigalov61/Allegro-Music-Transformer #text-to-audio #base_model-TheBloke/openchat_3.5-GPTQ #base_model-asigalov61/Allegro-Music-Transformer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# music_generation_model
music_generation_model is a merge of the following models using LazyMergekit:
* TheBloke/openchat_3.5-GPTQ
* asigalov61/Allegro-Music-Transformer
## Configuration
## Usage
| [
"# music_generation_model\n\nmusic_generation_model is a merge of the following models using LazyMergekit:\n* TheBloke/openchat_3.5-GPTQ\n* asigalov61/Allegro-Music-Transformer",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #TheBloke/openchat_3.5-GPTQ #asigalov61/Allegro-Music-Transformer #text-to-audio #base_model-TheBloke/openchat_3.5-GPTQ #base_model-asigalov61/Allegro-Music-Transformer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# music_generation_model\n\nmusic_generation_model is a merge of the following models using LazyMergekit:\n* TheBloke/openchat_3.5-GPTQ\n* asigalov61/Allegro-Music-Transformer",
"## Configuration",
"## Usage"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
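Since the card specifies neither the architecture nor a snippet, a cautious first step is to inspect the checkpoint's configuration before choosing a model class; only the repo id below comes from this card's metadata.

```python
# Hedged sketch: inspect the checkpoint, then load with the generic class.
from transformers import AutoConfig, AutoModel

repo_id = "abhayesian/BobzillaV25"
config = AutoConfig.from_pretrained(repo_id)
print(config.model_type, getattr(config, "architectures", None))

model = AutoModel.from_pretrained(repo_id)  # generic load once type is known
```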
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | abhayesian/BobzillaV25 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T04:51:01+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-wo-kqa_golden-sft
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
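For reference, these settings map roughly onto `transformers.TrainingArguments` as sketched below; the actual alignment-handbook recipe is not reproduced in this card, so the output directory name is illustrative.

```python
# Rough TrainingArguments equivalent of the hyperparameters above (illustrative;
# the actual alignment-handbook recipe config is not shown in this card).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-7b-wo-kqa_golden-sft",  # illustrative name
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # x 4 GPUs x 4 accumulation steps = 64 effective
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```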
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4124 | 0.96 | 6 | 1.0609 |
| 1.0531 | 1.92 | 12 | 0.7985 |
| 0.8342 | 2.88 | 18 | 0.7294 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-7b-wo-kqa_golden-sft", "results": []}]} | Minbyul/mistral-7b-wo-kqa_golden-sft | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:54:10+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mistral-7b-wo-kqa\_golden-sft
=============================
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7294
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
null | null |
# DavidAU/Typhon-Mixtral-v1-Q6_K-GGUF
This model was converted to GGUF format from [`Sao10K/Typhon-Mixtral-v1`](https://huggingface.co/Sao10K/Typhon-Mixtral-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Typhon-Mixtral-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Typhon-Mixtral-v1-Q6_K-GGUF --model typhon-mixtral-v1.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Typhon-Mixtral-v1-Q6_K-GGUF --model typhon-mixtral-v1.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m typhon-mixtral-v1.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"], "base_model": "mistralai/Mixtral-8x7B-v0.1"} | DavidAU/Typhon-Mixtral-v1-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-16T04:54:44+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #base_model-mistralai/Mixtral-8x7B-v0.1 #license-cc-by-nc-4.0 #region-us
|
# DavidAU/Typhon-Mixtral-v1-Q6_K-GGUF
This model was converted to GGUF format from 'Sao10K/Typhon-Mixtral-v1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Typhon-Mixtral-v1-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Typhon-Mixtral-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #base_model-mistralai/Mixtral-8x7B-v0.1 #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/Typhon-Mixtral-v1-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Typhon-Mixtral-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# WizardLM-2-8x22B - EXL2 2.25bpw
This is a 2.25bpw EXL2 quant of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 7.0 | 4.5859 |
| 6.0 | 4.6252 |
| 5.5 | 4.6493 |
| 5.0 | 4.6937 |
| 4.5 | 4.8029 |
| 4.0 | 4.9372 |
| 3.5 | 5.1336 |
| 3.25 | 5.3636 |
| 3.0 | 5.5468 |
| 2.75 | 5.8255 |
| 2.5 | 6.3362 |
| 2.25 | 7.7763 |
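For context, the reported score is standard perplexity, i.e. the exponential of the mean per-token negative log-likelihood over the evaluation set, which is why the differences balloon at the lowest bit rates. A minimal sketch of that computation, assuming per-token log-probabilities are already available:

```python
# Standard perplexity: exp(mean negative log-likelihood). Sketch only --
# exllamav2's test_inference.py handles this internally during evaluation.
import math

def perplexity(token_logprobs: list[float]) -> float:
    nll = -sum(token_logprobs) / len(token_logprobs)  # mean NLL in nats
    return math.exp(nll)

print(perplexity([-2.1, -0.4, -1.3]))  # toy example: ~3.55
```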
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
DATA_SET=/root/wikitext/wikitext-2-v1.parquet
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
BIT_PRECISIONS=(6.0 5.5 5.0 4.5 4.0 3.5 3.25 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
LOCAL_FOLDER="/root/models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
REMOTE_FOLDER="Dracones/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ ! -d "$LOCAL_FOLDER" ]; then
huggingface-cli download --local-dir-use-symlinks=False --local-dir "${LOCAL_FOLDER}" "${REMOTE_FOLDER}" >> /root/download.log 2>&1
fi
output=$(python test_inference.py -m "$LOCAL_FOLDER" -gs 40,40,40,40 -ed "$DATA_SET")
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
# rm -rf "${LOCAL_FOLDER}"
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
# Define variables
MODEL_DIR="/mnt/storage/models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "base_model": "microsoft/WizardLM-2-8x22B"} | Dracones/WizardLM-2-8x22B_exl2_2.25bpw | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"en",
"base_model:microsoft/WizardLM-2-8x22B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:55:37+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| WizardLM-2-8x22B - EXL2 2.25bpw
===============================
This is a 2.25bpw EXL2 quant of microsoft/WizardLM-2-8x22B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# Yamshadowexperiment28T3qm7xp-7B
Yamshadowexperiment28T3qm7xp-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: automerger/YamshadowExperiment28-7B
- model: nlpguy/T3QM7XP
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Yamshadowexperiment28T3qm7xp-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/Yamshadowexperiment28T3qm7xp-7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T04:57:46+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Yamshadowexperiment28T3qm7xp-7B
Yamshadowexperiment28T3qm7xp-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# Yamshadowexperiment28T3qm7xp-7B\n\nYamshadowexperiment28T3qm7xp-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Yamshadowexperiment28T3qm7xp-7B\n\nYamshadowexperiment28T3qm7xp-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
null | null |
# DavidAU/Venomia-1.1-m7-Q6_K-GGUF
This model was converted to GGUF format from [`Sao10K/Venomia-1.1-m7`](https://huggingface.co/Sao10K/Venomia-1.1-m7) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Venomia-1.1-m7) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Venomia-1.1-m7-Q6_K-GGUF --model venomia-1.1-m7.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Venomia-1.1-m7-Q6_K-GGUF --model venomia-1.1-m7.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m venomia-1.1-m7.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/Venomia-1.1-m7-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-16T04:59:15+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #license-cc-by-nc-4.0 #region-us
|
# DavidAU/Venomia-1.1-m7-Q6_K-GGUF
This model was converted to GGUF format from 'Sao10K/Venomia-1.1-m7' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Venomia-1.1-m7-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Venomia-1.1-m7' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/Venomia-1.1-m7-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Venomia-1.1-m7' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0534

| Label | Precision | Recall | F1 | Number |
|:---|:---|:---|:---|:---|
| Answer | 0.38023152270703475 | 0.5278121137206427 | 0.44202898550724634 | 809 |
| Header | 0.3333333333333333 | 0.24369747899159663 | 0.2815533980582524 | 119 |
| Question | 0.5214341387373344 | 0.6281690140845071 | 0.5698466780238501 | 1065 |

- Overall Precision: 0.4513
- Overall Recall: 0.5645
- Overall F1: 0.5016
- Overall Accuracy: 0.6341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.7733 | 1.0 | 10 | 1.5779 | {'precision': 0.03243847874720358, 'recall': 0.03584672435105068, 'f1': 0.03405754550792719, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.2723926380368098, 'recall': 0.2084507042253521, 'f1': 0.23617021276595745, 'number': 1065} | 0.1469 | 0.1259 | 0.1356 | 0.3498 |
| 1.4958 | 2.0 | 20 | 1.3947 | {'precision': 0.15568475452196381, 'recall': 0.2978986402966625, 'f1': 0.20449724225710647, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.24971493728620298, 'recall': 0.4112676056338028, 'f1': 0.310748492373182, 'number': 1065} | 0.2047 | 0.3407 | 0.2557 | 0.4093 |
| 1.32 | 3.0 | 30 | 1.2259 | {'precision': 0.2251798561151079, 'recall': 0.3868974042027194, 'f1': 0.28467485220554795, 'number': 809} | {'precision': 0.09090909090909091, 'recall': 0.05042016806722689, 'f1': 0.06486486486486487, 'number': 119} | {'precision': 0.3336864406779661, 'recall': 0.5915492957746479, 'f1': 0.4266847273958686, 'number': 1065} | 0.2838 | 0.4762 | 0.3556 | 0.4708 |
| 1.1874 | 4.0 | 40 | 1.1299 | {'precision': 0.25460992907801416, 'recall': 0.4437577255871446, 'f1': 0.3235691753041911, 'number': 809} | {'precision': 0.30864197530864196, 'recall': 0.21008403361344538, 'f1': 0.25, 'number': 119} | {'precision': 0.3852813852813853, 'recall': 0.5849765258215962, 'f1': 0.4645786726323639, 'number': 1065} | 0.3240 | 0.5053 | 0.3948 | 0.5607 |
| 1.079 | 5.0 | 50 | 1.0967 | {'precision': 0.28809523809523807, 'recall': 0.44870210135970334, 'f1': 0.35089415176413724, 'number': 809} | {'precision': 0.3170731707317073, 'recall': 0.2184873949579832, 'f1': 0.25870646766169153, 'number': 119} | {'precision': 0.4067073170731707, 'recall': 0.6262910798122066, 'f1': 0.4931608133086876, 'number': 1065} | 0.3541 | 0.5299 | 0.4245 | 0.5684 |
| 1.0153 | 6.0 | 60 | 1.0661 | {'precision': 0.32075471698113206, 'recall': 0.5043263288009888, 'f1': 0.39211917347429115, 'number': 809} | {'precision': 0.33783783783783783, 'recall': 0.21008403361344538, 'f1': 0.25906735751295334, 'number': 119} | {'precision': 0.5031055900621118, 'recall': 0.532394366197183, 'f1': 0.5173357664233575, 'number': 1065} | 0.4044 | 0.5018 | 0.4478 | 0.5887 |
| 0.9487 | 7.0 | 70 | 1.0371 | {'precision': 0.3273753527751646, 'recall': 0.43016069221260816, 'f1': 0.37179487179487175, 'number': 809} | {'precision': 0.28440366972477066, 'recall': 0.2605042016806723, 'f1': 0.2719298245614035, 'number': 119} | {'precision': 0.44015696533682147, 'recall': 0.631924882629108, 'f1': 0.5188897455666924, 'number': 1065} | 0.3895 | 0.5278 | 0.4482 | 0.5965 |
| 0.8939 | 8.0 | 80 | 1.0279 | {'precision': 0.3353711790393013, 'recall': 0.4746600741656366, 'f1': 0.39303991811668376, 'number': 809} | {'precision': 0.4166666666666667, 'recall': 0.21008403361344538, 'f1': 0.2793296089385475, 'number': 119} | {'precision': 0.4401008827238335, 'recall': 0.6553990610328638, 'f1': 0.5265937382119954, 'number': 1065} | 0.3966 | 0.5554 | 0.4628 | 0.6073 |
| 0.8226 | 9.0 | 90 | 1.0434 | {'precision': 0.36496980155306297, 'recall': 0.522867737948084, 'f1': 0.4298780487804878, 'number': 809} | {'precision': 0.2765957446808511, 'recall': 0.2184873949579832, 'f1': 0.24413145539906103, 'number': 119} | {'precision': 0.524451939291737, 'recall': 0.584037558685446, 'f1': 0.5526432696579298, 'number': 1065} | 0.4391 | 0.5374 | 0.4833 | 0.6047 |
| 0.8109 | 10.0 | 100 | 1.0504 | {'precision': 0.3830755232029117, 'recall': 0.5203955500618047, 'f1': 0.44129979035639416, 'number': 809} | {'precision': 0.3258426966292135, 'recall': 0.24369747899159663, 'f1': 0.27884615384615385, 'number': 119} | {'precision': 0.5186104218362283, 'recall': 0.5887323943661972, 'f1': 0.5514511873350924, 'number': 1065} | 0.4493 | 0.5404 | 0.4907 | 0.6087 |
| 0.7313 | 11.0 | 110 | 1.0353 | {'precision': 0.35545454545454547, 'recall': 0.48331273176761436, 'f1': 0.4096385542168675, 'number': 809} | {'precision': 0.34615384615384615, 'recall': 0.226890756302521, 'f1': 0.27411167512690354, 'number': 119} | {'precision': 0.486411149825784, 'recall': 0.6553990610328638, 'f1': 0.5584, 'number': 1065} | 0.4271 | 0.5600 | 0.4846 | 0.6283 |
| 0.7183 | 12.0 | 120 | 1.0649 | {'precision': 0.3668639053254438, 'recall': 0.5364647713226205, 'f1': 0.43574297188755023, 'number': 809} | {'precision': 0.35802469135802467, 'recall': 0.24369747899159663, 'f1': 0.29000000000000004, 'number': 119} | {'precision': 0.5118483412322274, 'recall': 0.6084507042253521, 'f1': 0.5559845559845559, 'number': 1065} | 0.4391 | 0.5575 | 0.4913 | 0.6293 |
| 0.6865 | 13.0 | 130 | 1.0692 | {'precision': 0.37521514629948366, 'recall': 0.5389369592088998, 'f1': 0.44241501775748354, 'number': 809} | {'precision': 0.38461538461538464, 'recall': 0.25210084033613445, 'f1': 0.30456852791878175, 'number': 119} | {'precision': 0.5404255319148936, 'recall': 0.596244131455399, 'f1': 0.5669642857142857, 'number': 1065} | 0.4559 | 0.5524 | 0.4995 | 0.6258 |
| 0.6566 | 14.0 | 140 | 1.0435 | {'precision': 0.3845446182152714, 'recall': 0.5166872682323856, 'f1': 0.4409282700421941, 'number': 809} | {'precision': 0.3488372093023256, 'recall': 0.25210084033613445, 'f1': 0.2926829268292683, 'number': 119} | {'precision': 0.5181747873163186, 'recall': 0.6291079812206573, 'f1': 0.568278201865988, 'number': 1065} | 0.4534 | 0.5610 | 0.5015 | 0.6295 |
| 0.6437 | 15.0 | 150 | 1.0534 | {'precision': 0.38023152270703475, 'recall': 0.5278121137206427, 'f1': 0.44202898550724634, 'number': 809} | {'precision': 0.3333333333333333, 'recall': 0.24369747899159663, 'f1': 0.2815533980582524, 'number': 119} | {'precision': 0.5214341387373344, 'recall': 0.6281690140845071, 'f1': 0.5698466780238501, 'number': 1065} | 0.4513 | 0.5645 | 0.5016 | 0.6341 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
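For quick inference, a minimal sketch is shown below; note that LayoutLM also expects a `bbox` tensor of 0-1000-normalized word boxes, which here are zero placeholders (in real use they come from an OCR step).

```python
# Minimal inference sketch; the bounding boxes are zero placeholders -- real
# inputs use 0-1000 normalized [x0, y0, x1, y1] boxes from an OCR step.
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizer

repo_id = "unhingedpanda/layoutlm-funsd"  # repo id from this card's metadata
tokenizer = LayoutLMTokenizer.from_pretrained(repo_id)
model = LayoutLMForTokenClassification.from_pretrained(repo_id)

enc = tokenizer("DATE: 04/16/2024", return_tensors="pt")
bbox = torch.zeros(enc["input_ids"].shape + (4,), dtype=torch.long)  # placeholder boxes

with torch.no_grad():
    logits = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"], bbox=bbox).logits
print([model.config.id2label[i] for i in logits.argmax(-1).squeeze().tolist()])
```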
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["funsd"], "base_model": "microsoft/layoutlm-base-uncased", "model-index": [{"name": "layoutlm-funsd", "results": []}]} | unhingedpanda/layoutlm-funsd | null | [
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"base_model:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:00:01+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #layoutlm #token-classification #generated_from_trainer #dataset-funsd #base_model-microsoft/layoutlm-base-uncased #license-mit #autotrain_compatible #endpoints_compatible #region-us
| layoutlm-funsd
==============
This model is a fine-tuned version of microsoft/layoutlm-base-uncased on the funsd dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0534

| Label | Precision | Recall | F1 | Number |
|:---|:---|:---|:---|:---|
| Answer | 0.38023152270703475 | 0.5278121137206427 | 0.44202898550724634 | 809 |
| Header | 0.3333333333333333 | 0.24369747899159663 | 0.2815533980582524 | 119 |
| Question | 0.5214341387373344 | 0.6281690140845071 | 0.5698466780238501 | 1065 |

* Overall Precision: 0.4513
* Overall Recall: 0.5645
* Overall F1: 0.5016
* Overall Accuracy: 0.6341
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #layoutlm #token-classification #generated_from_trainer #dataset-funsd #base_model-microsoft/layoutlm-base-uncased #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | pytorch |
# Model Card for wavecoder-ds-6.7b-GGUF
WaveCoder 🌊 is a series of large language models (LLMs) for the coding domain.
## Model Details
- WaveCoder-6.7b-ds = Trained using CodeOcean dataset
- WaveCoder-6.7b-pro = Trained using GPT-4 synthetic data
- WaveCoder-6.7b-ultra = Trained using enhanced GPT-4 synthetic data
### Model Description
WaveCoder 🌊 is a series of large language models (LLMs) for the coding domain, designed to solve relevant problems in the field of code through instruction-following learning. Its training dataset was generated from a subset of code-search-net data using a generator-discriminator framework based on LLMs that we proposed, covering four general code-related tasks: code generation, code summary, code translation, and code repair.
- **Developed by:** Yu, Zhaojian and Zhang, Xin and Shang, Ning and Huang, Yangyu and Xu, Can and Zhao, Yishujie and Hu, Wenxiang and Yin, Qiufeng
- **Model type:** Large Language Model
- **Language(s) (NLP):** English
- **License:** DeepSeek License (Model)
### Model Sources
- **Repository:** [https://huggingface.co/microsoft/wavecoder-ds-6.7b](https://huggingface.co/microsoft/wavecoder-ds-6.7b)
- **Paper :** [More Information Needed]
- **Demo :** [More Information Needed]
## Uses
Coding/Refactoring/Cleanup/Fixing Code
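Since this repo ships GGUF quants, the files can also be loaded from Python via the llama-cpp-python bindings; a minimal sketch follows (the exact `.gguf` filename below is an assumption, so check the repo's file list first).

```python
# Minimal sketch using llama-cpp-python; the quant filename is an assumption --
# substitute the actual .gguf file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="wavecoder-ds-6.7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```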
## Original: [https://huggingface.co/microsoft/wavecoder-ds-6.7b](https://huggingface.co/microsoft/wavecoder-ds-6.7b) | {"language": ["en"], "license": "mit", "library_name": "pytorch", "tags": ["code", "deepseek", "gguf", "f32", "f16", "q2", "q8", "q6", "q4_k_m", "humaneval"], "pipeline_tag": "text-generation"} | leafspark/wavecoder-ds-6.7b-GGUF | null | [
"pytorch",
"gguf",
"code",
"deepseek",
"f32",
"f16",
"q2",
"q8",
"q6",
"q4_k_m",
"humaneval",
"text-generation",
"en",
"license:mit",
"region:us"
] | null | 2024-04-16T05:00:10+00:00 | [] | [
"en"
] | TAGS
#pytorch #gguf #code #deepseek #f32 #f16 #q2 #q8 #q6 #q4_k_m #humaneval #text-generation #en #license-mit #region-us
|
# Model Card for wavecoder-ds-6.7b-GGUF
WaveCoder is a series of large language models (LLMs) for the coding domain.
## Model Details
- WaveCoder-6.7b-ds = Trained using CodeOcean dataset
- WaveCoder-6.7b-pro = Trained using GPT-4 synthetic data
- WaveCoder-6.7b-ultra = Trained using enhanced GPT-4 synthetic data
### Model Description
WaveCoder is a series of large language models (LLMs) for the coding domain, designed to solve relevant problems in the field of code through instruction-following learning. Its training dataset was generated from a subset of code-search-net data using a generator-discriminator framework based on LLMs that we proposed, covering four general code-related tasks: code generation, code summary, code translation, and code repair.
- Developed by: Yu, Zhaojian and Zhang, Xin and Shang, Ning and Huang, Yangyu and Xu, Can and Zhao, Yishujie and Hu, Wenxiang and Yin, Qiufeng
- Model type: Large Language Model
- Language(s) (NLP): English
- License: DeepSeek License (Model)
### Model Sources
- Repository: URL
- Paper :
- Demo :
## Uses
Coding/Refactoring/Cleanup/Fixing Code
## Original: URL | [
"# Model Card for wavecoder-ds-6.7b-GGUF\n\nWaveCoder is a series of large language models (LLMs) for the coding domain.",
"## Model Details\n\n- WaveCoder-6.7b-ds = Trained using CodeOcean dataset\n- WaveCoder-6.7b-pro = Trained using GPT-4 synthetic data\n- WaveCoder-6.7b-ultra = Trained using enhanced GPT-4 synthetic data",
"### Model Description\n\nWaveCoder is a series of large language models (LLMs) for the coding domain, designed to solve relevant problems in the field of code through instruction-following learning. Its training dataset was generated from a subset of code-search-net data using a generator-discriminator framework based on LLMs that we proposed, covering four general code-related tasks: code generation, code summary, code translation, and code repair.\n\n- Developed by: Yu, Zhaojian and Zhang, Xin and Shang, Ning and Huang, Yangyu and Xu, Can and Zhao, Yishujie and Hu, Wenxiang and Yin, Qiufeng\n- Model type: Large Language Model\n- Language(s) (NLP): English\n- License: DeepSeek License (Model)",
"### Model Sources\n\n- Repository: URL\n- Paper : \n- Demo :",
"## Uses\n\nCoding/Refactoring/Cleanup/Fixing Code",
"## Original: URL"
] | [
"TAGS\n#pytorch #gguf #code #deepseek #f32 #f16 #q2 #q8 #q6 #q4_k_m #humaneval #text-generation #en #license-mit #region-us \n",
"# Model Card for wavecoder-ds-6.7b-GGUF\n\nWaveCoder is a series of large language models (LLMs) for the coding domain.",
"## Model Details\n\n- WaveCoder-6.7b-ds = Trained using CodeOcean dataset\n- WaveCoder-6.7b-pro = Trained using GPT-4 synthetic data\n- WaveCoder-6.7b-ultra = Trained using enhanced GPT-4 synthetic data",
"### Model Description\n\nWaveCoder is a series of large language models (LLMs) for the coding domain, designed to solve relevant problems in the field of code through instruction-following learning. Its training dataset was generated from a subset of code-search-net data using a generator-discriminator framework based on LLMs that we proposed, covering four general code-related tasks: code generation, code summary, code translation, and code repair.\n\n- Developed by: Yu, Zhaojian and Zhang, Xin and Shang, Ning and Huang, Yangyu and Xu, Can and Zhao, Yishujie and Hu, Wenxiang and Yin, Qiufeng\n- Model type: Large Language Model\n- Language(s) (NLP): English\n- License: DeepSeek License (Model)",
"### Model Sources\n\n- Repository: URL\n- Paper : \n- Demo :",
"## Uses\n\nCoding/Refactoring/Cleanup/Fixing Code",
"## Original: URL"
] |
null | null |
# DavidAU/Venomia-m7-Q6_K-GGUF
This model was converted to GGUF format from [`Sao10K/Venomia-m7`](https://huggingface.co/Sao10K/Venomia-m7) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Venomia-m7) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Venomia-m7-Q6_K-GGUF --model venomia-m7.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Venomia-m7-Q6_K-GGUF --model venomia-m7.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m venomia-m7.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/Venomia-m7-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-16T05:00:13+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #license-cc-by-nc-4.0 #region-us
|
# DavidAU/Venomia-m7-Q6_K-GGUF
This model was converted to GGUF format from 'Sao10K/Venomia-m7' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Venomia-m7-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Venomia-m7' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/Venomia-m7-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Venomia-m7' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
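In the absence of official instructions, a minimal loading sketch for this prompt-tuning adapter with PEFT is given below; the base model path is read from the adapter config and is assumed to resolve (the metadata names `guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens`, which may be a local path).

```python
# Minimal sketch for loading this prompt-tuning adapter with PEFT; the base
# model path from the adapter config is assumed to be resolvable locally.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-prompttuning"
peft_config = PeftConfig.from_pretrained(adapter_id)

base = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```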
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens"} | tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-prompttuning | null | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens",
"region:us"
] | null | 2024-04-16T05:00:49+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
text-generation | transformers |
## Developed by :
* K2S3
## Model Number:
* K2S3-Mistral-7b-v1.50
## Base Model :
* mistralai/Mistral-7B-v0.1
### Training Data
* The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset.
* 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다.
### Training Method
* This model was fine-tuned on the "mistralai/Mistral-7B-v0.1" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning).
* 이 모델은 "mistralai/Mistral-7B-v0.1" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다.
### Hardware
* Hardware: Utilized two A100 (80G*2EA) GPUs for training.
* Training Factors: This model was fine-tuned with SFT, using the HuggingFace SFTtrainer and applied fsdp.
* 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다. | {"language": ["en", "ko"], "license": "cc-by-nc-4.0"} | Changgil/K2S3-Mistral-7b-v1.50 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:03:11+00:00 | [] | [
"en",
"ko"
] | TAGS
#transformers #safetensors #mistral #text-generation #en #ko #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Developed by :
* K2S3
## Model Number:
* K2S3-Mistral-7b-v1.50
## Base Model :
* mistralai/Mistral-7B-v0.1
### Training Data
* The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset.
* 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다.
### Training Method
* This model was fine-tuned on the "mistralai/Mistral-7B-v0.1" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning).
* 이 모델은 "mistralai/Mistral-7B-v0.1" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다.
### Hardware
* Hardware: Utilized two A100 (80G*2EA) GPUs for training.
* Training Factors: This model was fine-tuned with SFT, using the HuggingFace SFTtrainer and applied fsdp.
* 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다. | [
"## Developed by : \n* K2S3",
"## Model Number:\n* K2S3-Mistral-7b-v1.50",
"## Base Model : \n* mistralai/Mistral-7B-v0.1",
"### Training Data\n* The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset.\n* 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다.",
"### Training Method\n* This model was fine-tuned on the \"mistralai/Mistral-7B-v0.1\" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning).\n* 이 모델은 \"mistralai/Mistral-7B-v0.1\" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다.",
"### Hardware\n* Hardware: Utilized two A100 (80G*2EA) GPUs for training.\n* Training Factors: This model was fine-tuned with SFT, using the HuggingFace SFTtrainer and applied fsdp. \n* 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #en #ko #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Developed by : \n* K2S3",
"## Model Number:\n* K2S3-Mistral-7b-v1.50",
"## Base Model : \n* mistralai/Mistral-7B-v0.1",
"### Training Data\n* The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset.\n* 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다.",
"### Training Method\n* This model was fine-tuned on the \"mistralai/Mistral-7B-v0.1\" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning).\n* 이 모델은 \"mistralai/Mistral-7B-v0.1\" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다.",
"### Hardware\n* Hardware: Utilized two A100 (80G*2EA) GPUs for training.\n* Training Factors: This model was fine-tuned with SFT, using the HuggingFace SFTtrainer and applied fsdp. \n* 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다."
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0127
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
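For reference, the listed settings map onto `transformers.TrainingArguments` roughly as follows (a sketch: the output directory is a placeholder, and the optimizer line simply makes the library's default Adam settings explicit):

```python
# Rough TrainingArguments equivalent of the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-beans",        # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                # Adam, betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                          # "Native AMP" mixed precision
)
```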
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0666 | 1.54 | 100 | 0.0324 | 0.9925 |
| 0.0164 | 3.08 | 200 | 0.0127 | 1.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
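Since the usage sections above are empty, here is a hedged quick-start for inference. It assumes the checkpoint is public under the repo id in this card's metadata; `leaf.jpg` is a placeholder path to a bean-leaf photo.

```python
# Minimal inference sketch with the image-classification pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="cogsci13/vit-base-beans")
print(classifier("leaf.jpg"))  # prints the predicted disease labels with scores
```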
| {"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-base-beans", "results": []}]} | cogsci13/vit-base-beans | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:03:48+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| vit-base-beans
==============
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the beans dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0127
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
# DavidAU/Winterreise-m7-Q6_K-GGUF
This model was converted to GGUF format from [`Sao10K/Winterreise-m7`](https://huggingface.co/Sao10K/Winterreise-m7) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Winterreise-m7) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Winterreise-m7-Q6_K-GGUF --model winterreise-m7.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Winterreise-m7-Q6_K-GGUF --model winterreise-m7.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m winterreise-m7.Q6_K.gguf -n 128
```
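The same GGUF file can also be used from Python through the llama-cpp-python bindings; the sketch below assumes the quantized file has already been downloaded locally.

```python
# Hedged Python usage via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="winterreise-m7.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```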
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["LDJnr/Capybara", "chargoddard/rpguild", "PocketDoc/Guanaco-Unchained-Refined", "lemonilia/LimaRP"]} | DavidAU/Winterreise-m7-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:LDJnr/Capybara",
"dataset:chargoddard/rpguild",
"dataset:PocketDoc/Guanaco-Unchained-Refined",
"dataset:lemonilia/LimaRP",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-16T05:03:50+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #dataset-LDJnr/Capybara #dataset-chargoddard/rpguild #dataset-PocketDoc/Guanaco-Unchained-Refined #dataset-lemonilia/LimaRP #license-cc-by-nc-4.0 #region-us
|
# DavidAU/Winterreise-m7-Q6_K-GGUF
This model was converted to GGUF format from 'Sao10K/Winterreise-m7' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Winterreise-m7-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Winterreise-m7' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-LDJnr/Capybara #dataset-chargoddard/rpguild #dataset-PocketDoc/Guanaco-Unchained-Refined #dataset-lemonilia/LimaRP #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/Winterreise-m7-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Winterreise-m7' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null | # Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["en"], "license": "mit", "datasets": ["m-a-p/COIG-CQIA"]} | Dfgystile/rxt | null | [
"en",
"dataset:m-a-p/COIG-CQIA",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2024-04-16T05:03:54+00:00 | [
"1910.09700"
] | [
"en"
] | TAGS
#en #dataset-m-a-p/COIG-CQIA #arxiv-1910.09700 #license-mit #region-us
| # Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#en #dataset-m-a-p/COIG-CQIA #arxiv-1910.09700 #license-mit #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
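Pending author-provided code, the sketch below shows one plausible way to load this LoRA adapter with PEFT. The base checkpoint is an assumption inferred from the adapter's `base_model` name (a frozen q/v-proj variant of Llama-2-7b-chat-hf), and the Guanaco-style prompt is a placeholder.

```python
# Hedged quick-start: attach the LoRA adapter to an assumed base model.
# Llama-2 checkpoints are gated; accept Meta's license and log in first.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "meta-llama/Llama-2-7b-chat-hf"   # assumption, see note above
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
model = PeftModel.from_pretrained(
    model, "tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-q-v-proj-lora"
)

inputs = tokenizer("### Human: Hello!\n### Assistant:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```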
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "guanaco_Llama-2-7b-chat-hf_freeze_q_v_proj"} | tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-q-v-proj-lora | null | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:guanaco_Llama-2-7b-chat-hf_freeze_q_v_proj",
"region:us"
] | null | 2024-04-16T05:06:11+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-guanaco_Llama-2-7b-chat-hf_freeze_q_v_proj #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-guanaco_Llama-2-7b-chat-hf_freeze_q_v_proj #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
text-generation | transformers |
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which offer improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance against leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size.
For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 8x22B
* **Developed by**: WizardLM@Microsoft AI
* **Model type**: Mixture of Experts (MoE)
* **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
* **Parameters**: 141B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt lmsys's automatic, GPT-4-based MT-Bench evaluation framework to assess model performance.
WizardLM-2 8x22B demonstrates highly competitive performance even against the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set of real-world instructions covering the main categories of human requirements, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rate, excluding ties:
- WizardLM-2 8x22B falls only slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable to Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI-powered synthetic training system** to train WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details on this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
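Pending the official demo, the following is a minimal `transformers` generation sketch using the Vicuna-style prompt above. The sampling parameters are illustrative assumptions, and a 141B-parameter MoE requires several high-memory GPUs (`device_map="auto"` shards it across whatever is available).

```python
# Hedged generation sketch with the Vicuna-style multi-turn prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "dreamgen/WizardLM-2-8x22B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Who are you? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```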
| {"license": "apache-2.0"} | dreamgen/WizardLM-2-8x22B | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:07:04+00:00 | [
"2304.12244",
"2306.08568",
"2308.09583"
] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<p style="font-size:20px;" align="center">
<a href="URL target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
<a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="URL target="_blank">Discord</a>
</p>
## News [2024/04/15]
We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning and agent.
New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works
and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
For more details of WizardLM-2 please read our release blog post and upcoming paper.
## Model Details
* Model name: WizardLM-2 8x22B
* Developed by: WizardLM@Microsoft AI
* Model type: Mixture of Experts (MoE)
* Base model: mistral-community/Mixtral-8x22B-v0.1
* Parameters: 141B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (Upcoming)
* License: Apache2.0
## Model Capacities
MT-Bench
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Human Preferences Evaluation
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.
<p align="center" width="100%">
<a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo code on our github.
| [
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
text-generation | transformers |
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which offer improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance against leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size.
For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 8x22B
* **Developed by**: WizardLM@Microsoft AI
* **Model type**: Mixture of Experts (MoE)
* **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
* **Parameters**: 141B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt lmsys's automatic, GPT-4-based MT-Bench evaluation framework to assess model performance.
WizardLM-2 8x22B demonstrates highly competitive performance even against the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set of real-world instructions covering the main categories of human requirements, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rate, excluding ties:
- WizardLM-2 8x22B falls only slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable to Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI-powered synthetic training system** to train WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details on this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
| {"license": "apache-2.0"} | dreamgen-preview/WizardLM-2-8x22B | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:07:26+00:00 | [
"2304.12244",
"2306.08568",
"2308.09583"
] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<p style="font-size:20px;" align="center">
<a href="URL target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
<a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="URL target="_blank">Discord</a>
</p>
## News [2024/04/15]
We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning and agent.
New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works
and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
For more details of WizardLM-2 please read our release blog post and upcoming paper.
## Model Details
* Model name: WizardLM-2 8x22B
* Developed by: WizardLM@Microsoft AI
* Model type: Mixture of Experts (MoE)
* Base model: mistral-community/Mixtral-8x22B-v0.1
* Parameters: 141B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (Upcoming)
* License: Apache2.0
## Model Capacities
MT-Bench
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Human Preferences Evaluation
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.
<p align="center" width="100%">
<a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo code on our github.
| [
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5626
- F1 Score: 0.6122
- Accuracy: 0.6123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
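A minimal 🤗 PEFT sketch matching these settings is given below. The LoRA configuration, the `sequence` column name, and the use of Trainer's default AdamW in place of the listed Adam are assumptions not stated in this card, and the base model may additionally require `trust_remote_code=True`.
```python
# Sketch only: LoRA settings and dataset column names are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
base = "mahdibaghbanzadeh/seqsight_8192_512_30M"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)
model = get_peft_model(model, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16))
ds = load_dataset("mahdibaghbanzadeh/GUE_mouse_0")
ds = ds.map(lambda x: tokenizer(x["sequence"], truncation=True), batched=True)
args = TrainingArguments(
    output_dir="GUE_mouse_0-seqsight_8192_512_30M-L32_all",
    learning_rate=5e-4,                # as listed above
    per_device_train_batch_size=2048,  # as listed above
    per_device_eval_batch_size=2048,
    max_steps=10_000,                  # training_steps above
    lr_scheduler_type="linear",
    seed=42,
)
Trainer(model=model, args=args, train_dataset=ds["train"],
        eval_dataset=ds["validation"], tokenizer=tokenizer).train()
```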
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5919 | 50.0 | 200 | 0.7957 | 0.6165 | 0.6173 |
| 0.3814 | 100.0 | 400 | 1.0433 | 0.6258 | 0.6259 |
| 0.2593 | 150.0 | 600 | 1.2054 | 0.6196 | 0.6198 |
| 0.188 | 200.0 | 800 | 1.4103 | 0.6156 | 0.6173 |
| 0.1498 | 250.0 | 1000 | 1.4456 | 0.6194 | 0.6198 |
| 0.1244 | 300.0 | 1200 | 1.6041 | 0.6401 | 0.6407 |
| 0.1054 | 350.0 | 1400 | 1.6627 | 0.6260 | 0.6259 |
| 0.0921 | 400.0 | 1600 | 1.9471 | 0.6250 | 0.6259 |
| 0.0782 | 450.0 | 1800 | 1.8645 | 0.6262 | 0.6272 |
| 0.0717 | 500.0 | 2000 | 1.9689 | 0.6296 | 0.6296 |
| 0.0638 | 550.0 | 2200 | 2.0158 | 0.6306 | 0.6309 |
| 0.059 | 600.0 | 2400 | 2.1445 | 0.6427 | 0.6432 |
| 0.0548 | 650.0 | 2600 | 2.1952 | 0.6383 | 0.6383 |
| 0.0513 | 700.0 | 2800 | 2.2823 | 0.6193 | 0.6210 |
| 0.0502 | 750.0 | 3000 | 2.2090 | 0.6272 | 0.6272 |
| 0.0459 | 800.0 | 3200 | 2.1686 | 0.6359 | 0.6358 |
| 0.0463 | 850.0 | 3400 | 2.2694 | 0.6321 | 0.6321 |
| 0.0416 | 900.0 | 3600 | 2.2953 | 0.6367 | 0.6370 |
| 0.0424 | 950.0 | 3800 | 2.2903 | 0.6268 | 0.6272 |
| 0.0387 | 1000.0 | 4000 | 2.2178 | 0.6272 | 0.6272 |
| 0.0371 | 1050.0 | 4200 | 2.3688 | 0.6382 | 0.6383 |
| 0.0361 | 1100.0 | 4400 | 2.4424 | 0.6358 | 0.6358 |
| 0.0357 | 1150.0 | 4600 | 2.3318 | 0.6332 | 0.6333 |
| 0.0325 | 1200.0 | 4800 | 2.3164 | 0.6444 | 0.6444 |
| 0.0334 | 1250.0 | 5000 | 2.2451 | 0.6221 | 0.6222 |
| 0.0321 | 1300.0 | 5200 | 2.3705 | 0.6393 | 0.6395 |
| 0.0314 | 1350.0 | 5400 | 2.2540 | 0.6309 | 0.6309 |
| 0.0296 | 1400.0 | 5600 | 2.3779 | 0.6371 | 0.6370 |
| 0.0296 | 1450.0 | 5800 | 2.3859 | 0.6358 | 0.6358 |
| 0.0288 | 1500.0 | 6000 | 2.3234 | 0.6432 | 0.6432 |
| 0.0284 | 1550.0 | 6200 | 2.3637 | 0.6297 | 0.6296 |
| 0.0263 | 1600.0 | 6400 | 2.3816 | 0.6282 | 0.6284 |
| 0.0259 | 1650.0 | 6600 | 2.3158 | 0.6233 | 0.6235 |
| 0.0247 | 1700.0 | 6800 | 2.3534 | 0.6285 | 0.6284 |
| 0.0241 | 1750.0 | 7000 | 2.4556 | 0.6208 | 0.6210 |
| 0.0237 | 1800.0 | 7200 | 2.5598 | 0.6271 | 0.6272 |
| 0.0233 | 1850.0 | 7400 | 2.4094 | 0.6371 | 0.6370 |
| 0.0232 | 1900.0 | 7600 | 2.3423 | 0.6284 | 0.6296 |
| 0.0236 | 1950.0 | 7800 | 2.2824 | 0.6247 | 0.6247 |
| 0.0226 | 2000.0 | 8000 | 2.4139 | 0.6331 | 0.6333 |
| 0.0218 | 2050.0 | 8200 | 2.3980 | 0.6307 | 0.6309 |
| 0.0205 | 2100.0 | 8400 | 2.4304 | 0.6383 | 0.6383 |
| 0.0205 | 2150.0 | 8600 | 2.4736 | 0.6332 | 0.6333 |
| 0.0208 | 2200.0 | 8800 | 2.2060 | 0.6404 | 0.6407 |
| 0.0194 | 2250.0 | 9000 | 2.4112 | 0.6345 | 0.6346 |
| 0.0201 | 2300.0 | 9200 | 2.5069 | 0.6297 | 0.6296 |
| 0.0195 | 2350.0 | 9400 | 2.4667 | 0.6320 | 0.6321 |
| 0.0189 | 2400.0 | 9600 | 2.3927 | 0.6332 | 0.6333 |
| 0.0193 | 2450.0 | 9800 | 2.4038 | 0.6320 | 0.6321 |
| 0.019 | 2500.0 | 10000 | 2.3925 | 0.6333 | 0.6333 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_0-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T05:07:52+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_mouse\_0-seqsight\_8192\_512\_30M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 2.5626
* F1 Score: 0.6122
* Accuracy: 0.6123
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which offer improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size.
For more details on WizardLM-2, please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic GPT-4-based MT-Bench evaluation framework proposed by lmsys to assess model performance.
WizardLM-2 8x22B demonstrates highly competitive performance even against the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set of real-world instructions covering the main categories of human need, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rates, excluding ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable to Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI-powered synthetic training system** to train the WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details on this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note on model system prompt usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
<b> Inference WizardLM-2 Demo Script</b>
We provide WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
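For reference, a minimal transformers sketch of this prompt format is shown below; the generation settings are illustrative and this is not the official demo script:
```python
# Sketch of multi-turn inference with the Vicuna-style prompt above.
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "dreamgen/WizardLM-2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
def build_prompt(turns):
    # turns: list of (user, assistant_or_None); finished turns end with </s>
    parts = [system]
    for user, assistant in turns:
        parts.append(f" USER: {user} ASSISTANT:")
        if assistant is not None:
            parts.append(f" {assistant}</s>")
    return "".join(parts)
prompt = build_prompt([("Hi", "Hello."), ("Who are you?", None)])
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```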
| {"license": "apache-2.0"} | dreamgen/WizardLM-2-7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:08:27+00:00 | [
"2304.12244",
"2306.08568",
"2308.09583"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
<p style="font-size:20px;" align="center">
<a href="URL target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
<a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="URL target="_blank">Discord</a>
</p>
## News [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which offer improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size.
For more details on WizardLM-2, please read our release blog post and upcoming paper.
## Model Details
* Model name: WizardLM-2 7B
* Developed by: WizardLM@Microsoft AI
* Base model: mistralai/Mistral-7B-v0.1
* Parameters: 7B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (Upcoming)
* License: Apache2.0
## Model Capacities
MT-Bench
We also adopt the automatic GPT-4-based MT-Bench evaluation framework proposed by lmsys to assess model performance.
WizardLM-2 8x22B demonstrates highly competitive performance even against the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Human Preferences Evaluation
We carefully collected a complex and challenging set of real-world instructions covering the main categories of human need, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rates, excluding ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable to Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a fully AI-powered synthetic training system to train the WizardLM-2 models; please refer to our blog for more details on this system.
<p align="center" width="100%">
<a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
<b>Note on model system prompt usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as follows:
<b> Inference WizardLM-2 Demo Script</b>
We provide WizardLM-2 inference demo code on our GitHub.
| [
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5028
- F1 Score: 0.8025
- Accuracy: 0.8030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
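The optimizer/scheduler pair listed above can be built explicitly as in the sketch below; the zero warmup count is an assumption, since the card does not mention warmup:
```python
# Sketch only: `model` is a placeholder for the fine-tuned network above.
import torch
from transformers import get_linear_schedule_with_warmup
model = torch.nn.Linear(8, 2)  # stand-in module
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4,
                             betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000)
```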
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5556 | 7.41 | 200 | 0.4727 | 0.7654 | 0.7659 |
| 0.4638 | 14.81 | 400 | 0.4459 | 0.7859 | 0.7865 |
| 0.4319 | 22.22 | 600 | 0.4263 | 0.8060 | 0.8061 |
| 0.4062 | 29.63 | 800 | 0.4238 | 0.8073 | 0.8073 |
| 0.3848 | 37.04 | 1000 | 0.4144 | 0.8110 | 0.8111 |
| 0.3667 | 44.44 | 1200 | 0.4121 | 0.8175 | 0.8178 |
| 0.3501 | 51.85 | 1400 | 0.4133 | 0.8216 | 0.8219 |
| 0.3362 | 59.26 | 1600 | 0.4137 | 0.8212 | 0.8213 |
| 0.3239 | 66.67 | 1800 | 0.4125 | 0.8222 | 0.8222 |
| 0.3146 | 74.07 | 2000 | 0.4114 | 0.8237 | 0.8242 |
| 0.304 | 81.48 | 2200 | 0.4224 | 0.8246 | 0.8252 |
| 0.2963 | 88.89 | 2400 | 0.4174 | 0.8216 | 0.8216 |
| 0.2887 | 96.3 | 2600 | 0.4272 | 0.8247 | 0.8249 |
| 0.2815 | 103.7 | 2800 | 0.4209 | 0.8225 | 0.8227 |
| 0.2741 | 111.11 | 3000 | 0.4188 | 0.8207 | 0.8208 |
| 0.2684 | 118.52 | 3200 | 0.4274 | 0.8221 | 0.8222 |
| 0.2597 | 125.93 | 3400 | 0.4432 | 0.8230 | 0.8233 |
| 0.2537 | 133.33 | 3600 | 0.4400 | 0.8246 | 0.8246 |
| 0.2483 | 140.74 | 3800 | 0.4511 | 0.8261 | 0.8264 |
| 0.2425 | 148.15 | 4000 | 0.4531 | 0.8256 | 0.8258 |
| 0.2368 | 155.56 | 4200 | 0.4638 | 0.8239 | 0.8245 |
| 0.233 | 162.96 | 4400 | 0.4506 | 0.8243 | 0.8243 |
| 0.2262 | 170.37 | 4600 | 0.4586 | 0.8263 | 0.8265 |
| 0.2221 | 177.78 | 4800 | 0.4710 | 0.8225 | 0.8227 |
| 0.2161 | 185.19 | 5000 | 0.4695 | 0.8218 | 0.8221 |
| 0.2116 | 192.59 | 5200 | 0.4840 | 0.8191 | 0.8193 |
| 0.2085 | 200.0 | 5400 | 0.4848 | 0.8215 | 0.8216 |
| 0.2025 | 207.41 | 5600 | 0.4964 | 0.8228 | 0.8233 |
| 0.1995 | 214.81 | 5800 | 0.4949 | 0.8227 | 0.8230 |
| 0.195 | 222.22 | 6000 | 0.5058 | 0.8241 | 0.8245 |
| 0.1915 | 229.63 | 6200 | 0.5125 | 0.8194 | 0.8197 |
| 0.1887 | 237.04 | 6400 | 0.5025 | 0.8194 | 0.8196 |
| 0.1871 | 244.44 | 6600 | 0.5105 | 0.8203 | 0.8206 |
| 0.1835 | 251.85 | 6800 | 0.5126 | 0.8195 | 0.8197 |
| 0.1803 | 259.26 | 7000 | 0.5195 | 0.8219 | 0.8221 |
| 0.1777 | 266.67 | 7200 | 0.5362 | 0.8215 | 0.8219 |
| 0.1757 | 274.07 | 7400 | 0.5213 | 0.8189 | 0.8191 |
| 0.1731 | 281.48 | 7600 | 0.5318 | 0.8198 | 0.8200 |
| 0.1718 | 288.89 | 7800 | 0.5266 | 0.8207 | 0.8208 |
| 0.1688 | 296.3 | 8000 | 0.5276 | 0.8188 | 0.8190 |
| 0.167 | 303.7 | 8200 | 0.5362 | 0.8190 | 0.8193 |
| 0.1661 | 311.11 | 8400 | 0.5410 | 0.8211 | 0.8213 |
| 0.1645 | 318.52 | 8600 | 0.5499 | 0.8203 | 0.8206 |
| 0.163 | 325.93 | 8800 | 0.5439 | 0.8212 | 0.8215 |
| 0.1607 | 333.33 | 9000 | 0.5477 | 0.8193 | 0.8196 |
| 0.1605 | 340.74 | 9200 | 0.5486 | 0.8220 | 0.8222 |
| 0.1587 | 348.15 | 9400 | 0.5519 | 0.8214 | 0.8218 |
| 0.1593 | 355.56 | 9600 | 0.5500 | 0.8217 | 0.8219 |
| 0.1592 | 362.96 | 9800 | 0.5488 | 0.8211 | 0.8213 |
| 0.1583 | 370.37 | 10000 | 0.5493 | 0.8212 | 0.8215 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_1-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T05:08:32+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_mouse\_1-seqsight\_8192\_512\_30M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5028
* F1 Score: 0.8025
* Accuracy: 0.8030
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9522
- F1 Score: 0.5660
- Accuracy: 0.5661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
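Once trained, the resulting adapter can be loaded for inference roughly as follows; the toy input and binary label count are assumptions, and the base model may require `trust_remote_code=True`:
```python
# Sketch of loading this PEFT adapter on top of the base model.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer
base = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter = "mahdibaghbanzadeh/GUE_mouse_4-seqsight_8192_512_30M-L32_all"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)
model = PeftModel.from_pretrained(model, adapter).eval()
inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(pred)
```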
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6605 | 25.0 | 200 | 0.6937 | 0.5768 | 0.5884 |
| 0.5575 | 50.0 | 400 | 0.7714 | 0.5816 | 0.5842 |
| 0.4791 | 75.0 | 600 | 0.8610 | 0.5757 | 0.5852 |
| 0.4183 | 100.0 | 800 | 0.9211 | 0.5859 | 0.5858 |
| 0.3801 | 125.0 | 1000 | 0.9956 | 0.5819 | 0.5826 |
| 0.3486 | 150.0 | 1200 | 1.0383 | 0.5696 | 0.5714 |
| 0.3237 | 175.0 | 1400 | 1.0686 | 0.5737 | 0.5736 |
| 0.3032 | 200.0 | 1600 | 1.1618 | 0.5699 | 0.5704 |
| 0.2833 | 225.0 | 1800 | 1.1984 | 0.5688 | 0.5693 |
| 0.2673 | 250.0 | 2000 | 1.1581 | 0.5737 | 0.5736 |
| 0.2515 | 275.0 | 2200 | 1.2605 | 0.5594 | 0.5613 |
| 0.2384 | 300.0 | 2400 | 1.2406 | 0.5652 | 0.5651 |
| 0.2266 | 325.0 | 2600 | 1.2483 | 0.5684 | 0.5693 |
| 0.2154 | 350.0 | 2800 | 1.2792 | 0.5721 | 0.5720 |
| 0.2053 | 375.0 | 3000 | 1.3171 | 0.5646 | 0.5666 |
| 0.1963 | 400.0 | 3200 | 1.3455 | 0.5695 | 0.5698 |
| 0.1888 | 425.0 | 3400 | 1.4330 | 0.5663 | 0.5693 |
| 0.1799 | 450.0 | 3600 | 1.3236 | 0.5673 | 0.5672 |
| 0.1711 | 475.0 | 3800 | 1.4389 | 0.5693 | 0.5693 |
| 0.166 | 500.0 | 4000 | 1.4775 | 0.5610 | 0.5613 |
| 0.1573 | 525.0 | 4200 | 1.4220 | 0.5606 | 0.5613 |
| 0.1525 | 550.0 | 4400 | 1.4482 | 0.5636 | 0.5640 |
| 0.1462 | 575.0 | 4600 | 1.4892 | 0.5688 | 0.5688 |
| 0.1422 | 600.0 | 4800 | 1.5120 | 0.5635 | 0.5635 |
| 0.1359 | 625.0 | 5000 | 1.4702 | 0.5660 | 0.5661 |
| 0.1296 | 650.0 | 5200 | 1.5407 | 0.5641 | 0.5640 |
| 0.1282 | 675.0 | 5400 | 1.5774 | 0.5732 | 0.5730 |
| 0.1232 | 700.0 | 5600 | 1.5713 | 0.5728 | 0.5730 |
| 0.1187 | 725.0 | 5800 | 1.5289 | 0.5639 | 0.5640 |
| 0.1153 | 750.0 | 6000 | 1.6006 | 0.5763 | 0.5762 |
| 0.113 | 775.0 | 6200 | 1.5573 | 0.5666 | 0.5666 |
| 0.1082 | 800.0 | 6400 | 1.5754 | 0.5676 | 0.5682 |
| 0.106 | 825.0 | 6600 | 1.6283 | 0.5696 | 0.5698 |
| 0.1042 | 850.0 | 6800 | 1.6227 | 0.5708 | 0.5714 |
| 0.1021 | 875.0 | 7000 | 1.6072 | 0.5710 | 0.5709 |
| 0.1002 | 900.0 | 7200 | 1.6981 | 0.5695 | 0.5693 |
| 0.0967 | 925.0 | 7400 | 1.6811 | 0.5737 | 0.5736 |
| 0.0938 | 950.0 | 7600 | 1.6874 | 0.5730 | 0.5736 |
| 0.0932 | 975.0 | 7800 | 1.6737 | 0.5689 | 0.5688 |
| 0.0906 | 1000.0 | 8000 | 1.6726 | 0.5708 | 0.5709 |
| 0.0887 | 1025.0 | 8200 | 1.7003 | 0.5698 | 0.5698 |
| 0.0884 | 1050.0 | 8400 | 1.6966 | 0.5737 | 0.5736 |
| 0.0865 | 1075.0 | 8600 | 1.7259 | 0.5699 | 0.5698 |
| 0.0863 | 1100.0 | 8800 | 1.6854 | 0.5700 | 0.5698 |
| 0.0839 | 1125.0 | 9000 | 1.7360 | 0.5721 | 0.5720 |
| 0.083 | 1150.0 | 9200 | 1.7298 | 0.5748 | 0.5746 |
| 0.0819 | 1175.0 | 9400 | 1.7348 | 0.5709 | 0.5709 |
| 0.0812 | 1200.0 | 9600 | 1.7213 | 0.5700 | 0.5698 |
| 0.0811 | 1225.0 | 9800 | 1.7326 | 0.5710 | 0.5709 |
| 0.0805 | 1250.0 | 10000 | 1.7330 | 0.5725 | 0.5725 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_4-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T05:08:33+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_mouse\_4-seqsight\_8192\_512\_30M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9522
* F1 Score: 0.5660
* Accuracy: 0.5661
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
document-question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased_finetuned_passport
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
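A hedged inference sketch for this checkpoint is given below; the extractive-QA decoding and the sample question are assumptions about the task setup, and LayoutLMv2 additionally needs `detectron2` and `pytesseract` installed:
```python
# Sketch of document question answering with this fine-tuned checkpoint.
import torch
from PIL import Image
from transformers import LayoutLMv2ForQuestionAnswering, LayoutLMv2Processor
repo = "EphronM/layoutlmv2-base-uncased_finetuned_passport"
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForQuestionAnswering.from_pretrained(repo)
image = Image.open("passport.png").convert("RGB")  # hypothetical scan
encoding = processor(image, "What is the passport number?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)
start = int(outputs.start_logits.argmax(-1))
end = int(outputs.end_logits.argmax(-1))
print(processor.tokenizer.decode(encoding["input_ids"][0][start:end + 1]))
```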
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.409 | 2.78 | 50 | 2.3859 |
| 1.959 | 5.56 | 100 | 2.1196 |
| 1.5433 | 8.33 | 150 | 1.7951 |
| 1.1748 | 11.11 | 200 | 1.5172 |
| 0.87 | 13.89 | 250 | 1.3040 |
| 0.6342 | 16.67 | 300 | 1.1525 |
| 0.4597 | 19.44 | 350 | 1.0532 |
| 0.3338 | 22.22 | 400 | 0.9942 |
| 0.2429 | 25.0 | 450 | 0.9632 |
| 0.1786 | 27.78 | 500 | 0.9509 |
| 0.1347 | 30.56 | 550 | 0.9486 |
| 0.143 | 33.33 | 600 | 0.9500 |
| 0.0976 | 36.11 | 650 | 0.9527 |
| 0.0874 | 38.89 | 700 | 0.9556 |
| 0.0808 | 41.67 | 750 | 0.9582 |
| 0.0746 | 44.44 | 800 | 0.9603 |
| 0.073 | 47.22 | 850 | 0.9615 |
| 0.0717 | 50.0 | 900 | 0.9620 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "base_model": "microsoft/layoutlmv2-base-uncased", "model-index": [{"name": "layoutlmv2-base-uncased_finetuned_passport", "results": []}]} | EphronM/layoutlmv2-base-uncased_finetuned_passport | null | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"base_model:microsoft/layoutlmv2-base-uncased",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:09:45+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #layoutlmv2 #document-question-answering #generated_from_trainer #base_model-microsoft/layoutlmv2-base-uncased #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
| layoutlmv2-base-uncased\_finetuned\_passport
============================================
This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9620
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #layoutlmv2 #document-question-answering #generated_from_trainer #base_model-microsoft/layoutlmv2-base-uncased #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6646
- F1 Score: 0.6945
- Accuracy: 0.6946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
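The F1 and accuracy columns below can be reproduced with a metric callback along these lines; macro averaging is an assumption, since the card does not specify the averaging mode:
```python
# Sketch of the evaluation metrics; pass as compute_metrics= to the Trainer.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="macro"),
            "accuracy": accuracy_score(labels, preds)}
```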
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3619 | 200.0 | 200 | 1.3833 | 0.6973 | 0.6987 |
| 0.0453 | 400.0 | 400 | 1.7857 | 0.7153 | 0.7155 |
| 0.0211 | 600.0 | 600 | 1.9566 | 0.7106 | 0.7113 |
| 0.0121 | 800.0 | 800 | 1.9938 | 0.7406 | 0.7406 |
| 0.0087 | 1000.0 | 1000 | 2.1077 | 0.7531 | 0.7531 |
| 0.0065 | 1200.0 | 1200 | 2.1451 | 0.7531 | 0.7531 |
| 0.0053 | 1400.0 | 1400 | 2.1370 | 0.7490 | 0.7490 |
| 0.0044 | 1600.0 | 1600 | 2.3003 | 0.7320 | 0.7322 |
| 0.0039 | 1800.0 | 1800 | 2.3037 | 0.7448 | 0.7448 |
| 0.003 | 2000.0 | 2000 | 2.5266 | 0.7447 | 0.7448 |
| 0.0032 | 2200.0 | 2200 | 2.4774 | 0.7192 | 0.7197 |
| 0.0022 | 2400.0 | 2400 | 2.4663 | 0.7238 | 0.7238 |
| 0.0019 | 2600.0 | 2600 | 2.5395 | 0.7364 | 0.7364 |
| 0.0019 | 2800.0 | 2800 | 2.5716 | 0.7320 | 0.7322 |
| 0.0018 | 3000.0 | 3000 | 2.5063 | 0.7196 | 0.7197 |
| 0.0014 | 3200.0 | 3200 | 2.6278 | 0.7490 | 0.7490 |
| 0.0014 | 3400.0 | 3400 | 2.6045 | 0.7572 | 0.7573 |
| 0.0013 | 3600.0 | 3600 | 2.7386 | 0.7147 | 0.7155 |
| 0.0014 | 3800.0 | 3800 | 2.5384 | 0.7405 | 0.7406 |
| 0.0009 | 4000.0 | 4000 | 2.8440 | 0.7195 | 0.7197 |
| 0.0012 | 4200.0 | 4200 | 2.6927 | 0.7071 | 0.7071 |
| 0.0009 | 4400.0 | 4400 | 2.7829 | 0.7238 | 0.7238 |
| 0.0008 | 4600.0 | 4600 | 2.8514 | 0.7322 | 0.7322 |
| 0.0007 | 4800.0 | 4800 | 2.7974 | 0.7238 | 0.7238 |
| 0.0006 | 5000.0 | 5000 | 2.9477 | 0.7359 | 0.7364 |
| 0.0006 | 5200.0 | 5200 | 3.0076 | 0.7070 | 0.7071 |
| 0.0007 | 5400.0 | 5400 | 2.9671 | 0.7112 | 0.7113 |
| 0.0006 | 5600.0 | 5600 | 2.9265 | 0.7280 | 0.7280 |
| 0.0005 | 5800.0 | 5800 | 2.9105 | 0.6987 | 0.6987 |
| 0.0005 | 6000.0 | 6000 | 3.0270 | 0.7308 | 0.7322 |
| 0.0007 | 6200.0 | 6200 | 2.8516 | 0.7155 | 0.7155 |
| 0.0005 | 6400.0 | 6400 | 2.8789 | 0.7232 | 0.7238 |
| 0.0004 | 6600.0 | 6600 | 3.1223 | 0.7176 | 0.7197 |
| 0.0003 | 6800.0 | 6800 | 3.3147 | 0.7238 | 0.7238 |
| 0.0004 | 7000.0 | 7000 | 3.2076 | 0.7196 | 0.7197 |
| 0.0004 | 7200.0 | 7200 | 2.9898 | 0.7444 | 0.7448 |
| 0.0004 | 7400.0 | 7400 | 3.1094 | 0.7196 | 0.7197 |
| 0.0002 | 7600.0 | 7600 | 3.3229 | 0.7322 | 0.7322 |
| 0.0004 | 7800.0 | 7800 | 3.0860 | 0.7473 | 0.7490 |
| 0.0003 | 8000.0 | 8000 | 3.2034 | 0.6985 | 0.6987 |
| 0.0004 | 8200.0 | 8200 | 2.9285 | 0.7361 | 0.7364 |
| 0.0002 | 8400.0 | 8400 | 3.1690 | 0.7196 | 0.7197 |
| 0.0002 | 8600.0 | 8600 | 3.2931 | 0.7320 | 0.7322 |
| 0.0002 | 8800.0 | 8800 | 3.2642 | 0.7361 | 0.7364 |
| 0.0002 | 9000.0 | 9000 | 3.2619 | 0.7402 | 0.7406 |
| 0.0002 | 9200.0 | 9200 | 3.2664 | 0.7405 | 0.7406 |
| 0.0002 | 9400.0 | 9400 | 3.1945 | 0.7402 | 0.7406 |
| 0.0002 | 9600.0 | 9600 | 3.1598 | 0.7403 | 0.7406 |
| 0.0001 | 9800.0 | 9800 | 3.2188 | 0.7405 | 0.7406 |
| 0.0001 | 10000.0 | 10000 | 3.2184 | 0.7405 | 0.7406 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_3-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T05:10:13+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_mouse\_3-seqsight\_8192\_512\_30M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 2.6646
* F1 Score: 0.6945
* Accuracy: 0.6946
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
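A minimal, unverified loading sketch is shown below; the repo id comes from this card's metadata, and the expected prompt format is unknown:
```python
# Unverified sketch: loads the checkpoint named in this card's metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer
repo = "swj0419/email_retrain_STEP0000002"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```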
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | swj0419/email_retrain_STEP0000002 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:10:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
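Until the authors provide an official snippet, the following is a minimal, unofficial sketch. It assumes the repository ships a tokenizer with a chat template and that the `autoawq` package is installed so 🤗 Transformers can load the 4-bit AWQ weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yentinglin/Taiwan-LLM-8x7B-DPO-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ 4-bit checkpoints load through transformers when autoawq is installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Tell me about Taiwan."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```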
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | yentinglin/Taiwan-LLM-8x7B-DPO-awq | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-16T05:10:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-wo-kqa_golden-sft
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
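For readers who want to reproduce this setup, the listed values map onto 🤗 `TrainingArguments` roughly as below. This is an illustrative sketch, not the actual launcher; the tags suggest the alignment-handbook SFT recipe was used, and the output directory here is assumed.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above.
# The Adam betas/epsilon shown in the card are the AdamW defaults.
# 4 GPUs x per-device batch 4 x grad-accum 4 = 64 effective train batch.
args = TrainingArguments(
    output_dir="llama2-7b-wo-kqa_golden-sft",  # assumed output dir
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```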
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0994 | 0.89 | 6 | 0.9703 |
| 0.9662 | 1.93 | 13 | 0.8242 |
| 0.856 | 2.67 | 18 | 0.8030 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama2-7b-wo-kqa_golden-sft", "results": []}]} | Minbyul/llama2-7b-wo-kqa_golden-sft | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:11:21+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| llama2-7b-wo-kqa\_golden-sft
============================
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8030
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
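As a placeholder until the authors add one, here is a minimal, unofficial loading sketch. It assumes the tokenizer ships Gemma's chat template and that bfloat16 inference is acceptable.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "12thD/ko-gemma-7b-dpo-sft-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```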
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | 12thD/ko-gemma-7b-dpo-sft-v1.1 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:13:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
[AQLM](https://arxiv.org/abs/2401.06118) quantized version of the deepseek-coder-7b-base-v1.5 model.
Refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM) for more information.
---
### 1. Introduction of Deepseek-Coder-7B-Base-v1.5
Deepseek-Coder-7B-Base-v1.5 is continually pre-trained from Deepseek-LLM 7B on 2T tokens, using a 4K window size and a next-token-prediction objective.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 2. Evaluation Results
<img width="1000px" alt="DeepSeek Coder" src="https://cdn-uploads.huggingface.co/production/uploads/6538815d1bdb3c40db94fbfa/xOtCTW5xdoLCKY4FR6tri.png">
### 3. How to Use
Here is an example of how to use our model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-7b-base-v1.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-7b-base-v1.5", trust_remote_code=True).cuda()
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)  # BatchEncoding has no .cuda(); move tensors with .to()
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
| {"license": "other", "license_name": "deepseek-license", "license_link": "LICENSE"} | TechxGenus/deepseek-coder-7b-base-v1.5-AQLM | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2401.06118",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:14:22+00:00 | [
"2401.06118"
] | [] | TAGS
#transformers #pytorch #llama #text-generation #arxiv-2401.06118 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="URL
</p>
<p align="center"><a href="URL | <a href="URL Chat with DeepSeek Coder]</a> | <a href="URL | <a href="URL(微信)]</a> </p>
<hr>
AQLM quantized version of the deepseek-coder-7b-base-v1.5 model.
Refer to the official GitHub repo for more information.
---
### 1. Introduction of Deepseek-Coder-7B-Base-v1.5
Deepseek-Coder-7B-Base-v1.5 is continually pre-trained from Deepseek-LLM 7B on 2T tokens, using a 4K window size and a next-token-prediction objective.
- Home Page: DeepSeek
- Repository: deepseek-ai/deepseek-coder
- Chat With DeepSeek Coder: DeepSeek-Coder
### 2. Evaluation Results
<img width="1000px" alt="DeepSeek Coder" src="URL
### 3. How to Use
Here is an example of how to use our model.
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the LICENSE-MODEL for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at service@URL.
| [
"### 1. Introduction of Deepseek-Coder-7B-Base-v1.5\n\nDeepseek-Coder-7B-Base-v1.5 is continue pre-trained from Deepseek-LLM 7B on 2T tokens by employing a window size of 4K and next token prediction objective.\n\n- Home Page: DeepSeek\n- Repository: deepseek-ai/deepseek-coder\n- Chat With DeepSeek Coder: DeepSeek-Coder",
"### 2. Evaluation Results\n<img width=\"1000px\" alt=\"DeepSeek Coder\" src=\"URL",
"### 3. How to Use\nHere give an example of how to use our model.",
"### 4. License\nThis code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.\n\nSee the LICENSE-MODEL for more details.",
"### 5. Contact\n\nIf you have any questions, please raise an issue or contact us at service@URL."
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #arxiv-2401.06118 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### 1. Introduction of Deepseek-Coder-7B-Base-v1.5\n\nDeepseek-Coder-7B-Base-v1.5 is continue pre-trained from Deepseek-LLM 7B on 2T tokens by employing a window size of 4K and next token prediction objective.\n\n- Home Page: DeepSeek\n- Repository: deepseek-ai/deepseek-coder\n- Chat With DeepSeek Coder: DeepSeek-Coder",
"### 2. Evaluation Results\n<img width=\"1000px\" alt=\"DeepSeek Coder\" src=\"URL",
"### 3. How to Use\nHere give an example of how to use our model.",
"### 4. License\nThis code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.\n\nSee the LICENSE-MODEL for more details.",
"### 5. Contact\n\nIf you have any questions, please raise an issue or contact us at service@URL."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.01-len_2-filtered-v2
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
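A minimal adapter-loading sketch (not from the card) is shown below. The task head is not stated; the SberQuAD dataset in the name suggests extractive QA, so `AutoModel` is used generically here and should be swapped for the appropriate task class.

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Load the frozen base model, then attach the PEFT adapter from this repo.
base = AutoModel.from_pretrained("ai-forever/ruBert-base")
model = PeftModel.from_pretrained(
    base, "Shalazary/ruBert-base-sberquad-0.01-len_2-filtered-v2"
)
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruBert-base")
```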
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.01-len_2-filtered-v2", "results": []}]} | Shalazary/ruBert-base-sberquad-0.01-len_2-filtered-v2 | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T05:14:32+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.01-len_2-filtered-v2
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# ruBert-base-sberquad-0.01-len_2-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.01-len_2-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-to-image | diffusers |
## Majicmix-lux
<img src="" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details -
[](https://imagepipeline.io/models/Majicmix-lux?id=ccd867a7-ee2b-49a9-9387-ef2f17133a21/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc ? Checkout our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "ccd867a7-ee2b-49a9-9387-ef2f17133a21",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
| {"license": "creativeml-openrail-m", "tags": ["imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic"], "pinned": false, "pipeline_tag": "text-to-image"} | imagepipeline/Majicmix-lux | null | [
"diffusers",
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-16T05:16:20+00:00 | [] | [] | TAGS
#diffusers #imagepipeline #imagepipeline.io #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
| Majicmix-lux
------------
![Generated on Image Pipeline]()
This checkpoint model is uploaded on URL
Model details -
# donut-ktp-v1

This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
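A minimal inference sketch (not from the model authors) is given below. It assumes the `DonutProcessor` was uploaded with this checkpoint, and the task prompt token `<s_ktp>` is hypothetical; check the repository for the actual token used during fine-tuning.

```python
from transformers import DonutProcessor, VisionEncoderDecoderModel
from PIL import Image

repo = "quissuiven/donut-ktp-v1"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("ktp_card.jpg").convert("RGB")  # example input image
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_ktp>"  # hypothetical task token
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(
    pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```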
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "donut-ktp-v1", "results": []}]} | quissuiven/donut-ktp-v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:17:48+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# donut-ktp-v1
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# donut-ktp-v1\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# donut-ktp-v1\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | null |
# DavidAU/NyakuraV2-34B-Yi-Llama-Q6_K-GGUF
This model was converted to GGUF format from [`Sao10K/NyakuraV2-34B-Yi-Llama`](https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/NyakuraV2-34B-Yi-Llama-Q6_K-GGUF --model nyakurav2-34b-yi-llama.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/NyakuraV2-34B-Yi-Llama-Q6_K-GGUF --model nyakurav2-34b-yi-llama.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nyakurav2-34b-yi-llama.Q6_K.gguf -n 128
```
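Beyond the CLI, the same GGUF file can be loaded from Python. This is an unofficial sketch, assuming the `llama-cpp-python` bindings are installed; it is not covered by the card above.

```python
from llama_cpp import Llama

# Load the quantized checkpoint with a 2048-token context window.
llm = Llama(model_path="nyakurav2-34b-yi-llama.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```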
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/NyakuraV2-34B-Yi-Llama-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-16T05:18:23+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #license-cc-by-nc-4.0 #region-us
|
# DavidAU/NyakuraV2-34B-Yi-Llama-Q6_K-GGUF
This model was converted to GGUF format from 'Sao10K/NyakuraV2-34B-Yi-Llama' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/NyakuraV2-34B-Yi-Llama-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/NyakuraV2-34B-Yi-Llama' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/NyakuraV2-34B-Yi-Llama-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/NyakuraV2-34B-Yi-Llama' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DialogLED-base-16384-icsi-finetuned-10epochs
This model is a fine-tuned version of [MingZhong/DialogLED-base-16384](https://huggingface.co/MingZhong/DialogLED-base-16384) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
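A minimal summarization sketch (not from the card) follows. DialogLED is a LED-based seq2seq model, so, per the usual LED convention, global attention is placed on the first token; the dialogue string is an illustrative stand-in for a real meeting transcript.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "StDestiny/DialogLED-base-16384-icsi-finetuned-10epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

dialogue = "A: Shall we review the agenda?\nB: Yes, let's start with the budget."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=16384)
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the first token
summary_ids = model.generate(
    **inputs, global_attention_mask=global_attention_mask, max_new_tokens=128
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```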
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "MingZhong/DialogLED-base-16384", "model-index": [{"name": "DialogLED-base-16384-icsi-finetuned-10epochs", "results": []}]} | StDestiny/DialogLED-base-16384-icsi-finetuned-10epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"led",
"text2text-generation",
"generated_from_trainer",
"base_model:MingZhong/DialogLED-base-16384",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:19:53+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #led #text2text-generation #generated_from_trainer #base_model-MingZhong/DialogLED-base-16384 #autotrain_compatible #endpoints_compatible #region-us
|
# DialogLED-base-16384-icsi-finetuned-10epochs
This model is a fine-tuned version of MingZhong/DialogLED-base-16384 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# DialogLED-base-16384-icsi-finetuned-10epochs\n\nThis model is a fine-tuned version of MingZhong/DialogLED-base-16384 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 50",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #led #text2text-generation #generated_from_trainer #base_model-MingZhong/DialogLED-base-16384 #autotrain_compatible #endpoints_compatible #region-us \n",
"# DialogLED-base-16384-icsi-finetuned-10epochs\n\nThis model is a fine-tuned version of MingZhong/DialogLED-base-16384 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 50",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | swj0419/email_retrain_STEP0000004 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:20:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification | transformers |
# Financial Entity Identification through NER and DistilBERT
## 1. Loading Dataset
The dataset used in this project is obtained from the Hugging Face library, named `nlpaueb/finer-139`. It contains annotated data for named entity recognition tasks.
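
A minimal sketch of this step with the `datasets` library (the column names follow the dataset's published schema, but verify them on your end):

```python
from datasets import load_dataset

raw = load_dataset("nlpaueb/finer-139")
print(raw)  # DatasetDict with train / validation / test splits
print(raw["train"][0]["tokens"][:8], raw["train"][0]["ner_tags"][:8])
```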
## 2. Dataset Size Reduction
Due to the large size of the dataset, we reduce it to a manageable size to achieve good accuracy during training. This step involves selecting a subset of the data for training, validation, and testing.
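
One way to carve out a smaller working set; the split sizes below are illustrative (the `37500` in the model name hints at the training-subset size, but that is a guess):

```python
small = {
    split: raw[split].shuffle(seed=42).select(range(n))
    for split, n in {"train": 37_500, "validation": 5_000, "test": 5_000}.items()
}
```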
## 3. Map Indices to Tags and Vice Versa
This section involves mapping indices to NER tag names and vice versa. These mappings are essential for converting between numerical indices and string representations of NER tags.
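
A sketch of both directions using the dataset's `ClassLabel` feature:

```python
label_names = raw["train"].features["ner_tags"].feature.names
id2label = dict(enumerate(label_names))               # index -> tag name
label2id = {name: i for i, name in id2label.items()}  # tag name -> index
```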
## 4. Mapping Encoded NER Tags to String Representations
Here, we convert the encoded NER tags to their string representations to facilitate better understanding and interpretation of the data.
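
Continuing the sketch above:

```python
example = raw["train"][0]
decoded = [id2label[tag] for tag in example["ner_tags"]]
print(list(zip(example["tokens"], decoded))[:8])  # (token, string tag) pairs
```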
## 5. Loading a Pre-trained Tokenizer
We load a pre-trained tokenizer, `DistilBERT`, from the Hugging Face Transformers library. The tokenizer is essential for tokenizing the input text data, which is a crucial step in NER tasks.
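
A minimal sketch of this step; pre-tokenized word lists go in with `is_split_into_words=True`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
enc = tokenizer(raw["train"][0]["tokens"], is_split_into_words=True, truncation=True)
print(enc.tokens()[:10])    # sub-word tokens, including [CLS]/[SEP]
print(enc.word_ids()[:10])  # index of the source word behind each token
```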
## 6. Align Labels with Tokens
This section describes the process of aligning labels with tokens in tokenized sequences. It ensures that each label corresponds accurately to its respective token in the tokenized input sequence.
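
One common implementation, continuing the running sketch (labelling only the first sub-token of each word and masking the rest with `-100` is a convention choice, not something this card states):

```python
def align_labels_with_tokens(labels, word_ids):
    aligned, current_word = [], None
    for word_id in word_ids:
        if word_id is None:               # special tokens ([CLS], [SEP], padding)
            aligned.append(-100)
        elif word_id != current_word:     # first sub-token of a new word
            current_word = word_id
            aligned.append(labels[word_id])
        else:                             # continuation sub-token: ignored in the loss
            aligned.append(-100)
    return aligned

def tokenize_and_align(batch):
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    enc["labels"] = [align_labels_with_tokens(tags, enc.word_ids(i))
                     for i, tags in enumerate(batch["ner_tags"])]
    return enc

tokenized = {split: ds.map(tokenize_and_align, batched=True,
                           remove_columns=ds.column_names)
             for split, ds in small.items()}
```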
## 7. Create Batches of Tokenized Input Data
We use a `DataCollatorForTokenClassification` to create batches of tokenized input data for token classification tasks. This step prepares the data for training and evaluation of NER models.
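
A sketch of the collator in action:

```python
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
batch = data_collator([tokenized["train"][i] for i in range(2)])
print(batch["labels"])  # label rows padded to a common length with -100
```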
## 8. Evaluation Metrics
Here, we install and use the `seqeval` library to compute evaluation metrics such as precision, recall, F1 score, and accuracy for evaluating the performance of NER models.
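
A sketch of the metric hook; loading seqeval through the `evaluate` wrapper (after `pip install seqeval evaluate`) is one common route, but treat it as an assumption:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    preds = np.argmax(logits, axis=-1)
    true_labels = [[id2label[l] for l in row if l != -100] for row in labels]
    true_preds = [[id2label[p] for p, l in zip(p_row, l_row) if l != -100]
                  for p_row, l_row in zip(preds, labels)]
    scores = seqeval.compute(predictions=true_preds, references=true_labels)
    return {"precision": scores["overall_precision"],
            "recall": scores["overall_recall"],
            "f1": scores["overall_f1"],
            "accuracy": scores["overall_accuracy"]}
```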
## 9. Setup Data Pipeline for Checkpointing
We set up a data pipeline to save all weights and model parameters in a folder for deployment on Hugging Face.
## 10. Define Model
We define the NER model using `AutoModelForTokenClassification` from the Hugging Face Transformers library. The model is initialized with pre-trained weights and configured for token classification tasks.
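
A minimal sketch, passing the label mappings from earlier so the saved config is self-describing:

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", id2label=id2label, label2id=label2id)
```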
## 11. Setting up Training Arguments
This section involves setting up training arguments such as learning rate, number of training epochs, and weight decay for training the NER model.
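
The card does not state the concrete values, so the ones below are placeholders; `output_dir` doubles as the step-9 checkpoint folder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finguard_distilBERT_37500",  # also the checkpoint folder from step 9
    learning_rate=2e-5,                      # placeholder
    num_train_epochs=3,                      # placeholder
    weight_decay=0.01,                       # placeholder
    evaluation_strategy="epoch",
    save_strategy="epoch",
)
```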
## 12. Training the Model
We train the NER model using the defined model, training arguments, data collator, tokenizer, and evaluation metrics.
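
Putting the pieces from the previous sketches together:

```python
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```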
## 13. Deployment and Conclusion
The final section concludes the project, mentioning the training duration, achieved accuracy, and deployment on Hugging Face. It also outlines any further steps or observations. | {"license": "mit"} | AnirudhLanka2002/finguard_distilBERT_37500 | null | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:21:03+00:00 | [] | [] | TAGS
#transformers #safetensors #distilbert #token-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Financial Entity Identification through NER and DistilBERT
## 1. Loading Dataset
The dataset used in this project is obtained from the Hugging Face library, named 'nlpaueb/finer-139'. It contains annotated data for named entity recognition tasks.
## 2. Dataset Size Reduction
Due to the large size of the dataset, we reduce it to a manageable size to achieve good accuracy during training. This step involves selecting a subset of the data for training, validation, and testing.
## 3. Map Indices to Tags and Vice Versa
This section involves mapping indices to NER tag names and vice versa. These mappings are essential for converting between numerical indices and string representations of NER tags.
## 4. Mapping Encoded NER Tags to String Representations
Here, we convert the encoded NER tags to their string representations to facilitate better understanding and interpretation of the data.
## 5. Loading a Pre-trained Tokenizer
We load a pre-trained tokenizer, 'DistilBERT', from the Hugging Face Transformers library. The tokenizer is essential for tokenizing the input text data, which is a crucial step in NER tasks.
## 6. Align Labels with Tokens
This section describes the process of aligning labels with tokens in tokenized sequences. It ensures that each label corresponds accurately to its respective token in the tokenized input sequence.
## 7. Create Batches of Tokenized Input Data
We use a 'DataCollatorForTokenClassification' to create batches of tokenized input data for token classification tasks. This step prepares the data for training and evaluation of NER models.
## 8. Evaluation Metrics
Here, we install and use the 'seqeval' library to compute evaluation metrics such as precision, recall, F1 score, and accuracy for evaluating the performance of NER models.
## 9. Setup Data Pipeline for Checkpointing
We set up a data pipeline to save all weights and model parameters in a folder for deployment on Hugging Face.
## 10. Define Model
We define the NER model using 'AutoModelForTokenClassification' from the Hugging Face Transformers library. The model is initialized with pre-trained weights and configured for token classification tasks.
## 11. Setting up Training Arguments
This section involves setting up training arguments such as learning rate, number of training epochs, and weight decay for training the NER model.
## 12. Training the Model
We train the NER model using the defined model, training arguments, data collator, tokenizer, and evaluation metrics.
## 13. Deployment and Conclusion
The final section concludes the project, mentioning the training duration, achieved accuracy, and deployment on Hugging Face. It also outlines any further steps or observations. | [
"# Financial Entity Identification through NER and DistilBERT",
"## 1. Loading Dataset\n\nThe dataset used in this project is obtained from the Hugging Face library, named 'nlpaueb/finer-139'. It contains annotated data for named entity recognition tasks.",
"## 2. Dataset Size Reduction\n\nDue to the large size of the dataset, we reduce it to a manageable size to achieve good accuracy during training. This step involves selecting a subset of the data for training, validation, and testing.",
"## 3. Map Indices to Tags and Vice Versa\n\nThis section involves mapping indices to NER tag names and vice versa. These mappings are essential for converting between numerical indices and string representations of NER tags.",
"## 4. Mapping Encoded NER Tags to String Representations\n\nHere, we convert the encoded NER tags to their string representations to facilitate better understanding and interpretation of the data.",
"## 5. Loading a Pre-trained Tokenizer\n\nWe load a pre-trained tokenizer, 'DistilBERT', from the Hugging Face Transformers library. The tokenizer is essential for tokenizing the input text data, which is a crucial step in NER tasks.",
"## 6. Align Labels with Tokens\n\nThis section describes the process of aligning labels with tokens in tokenized sequences. It ensures that each label corresponds accurately to its respective token in the tokenized input sequence.",
"## 7. Create Batches of Tokenized Input Data\n\nWe use a 'DataCollatorForTokenClassification' to create batches of tokenized input data for token classification tasks. This step prepares the data for training and evaluation of NER models.",
"## 8. Evaluation Metrics\n\nHere, we install and use the 'seqeval' library to compute evaluation metrics such as precision, recall, F1 score, and accuracy for evaluating the performance of NER models.",
"## 9. Setup Data Pipeline for Checkpointing\n\nWe set up a data pipeline to save all weights and model parameters in a folder for deployment on Hugging Face.",
"## 10. Define Model\n\nWe define the NER model using 'AutoModelForTokenClassification' from the Hugging Face Transformers library. The model is initialized with pre-trained weights and configured for token classification tasks.",
"## 11. Setting up Training Arguments\n\nThis section involves setting up training arguments such as learning rate, number of training epochs, and weight decay for training the NER model.",
"## 12. Training the Model\n\nWe train the NER model using the defined model, training arguments, data collator, tokenizer, and evaluation metrics.",
"## 13. Deployment and Conclusion\n\nThe final section concludes the project, mentioning the training duration, achieved accuracy, and deployment on Hugging Face. It also outlines any further steps or observations."
] | [
"TAGS\n#transformers #safetensors #distilbert #token-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Financial Entity Identification through NER and DistilBERT",
"## 1. Loading Dataset\n\nThe dataset used in this project is obtained from the Hugging Face library, named 'nlpaueb/finer-139'. It contains annotated data for named entity recognition tasks.",
"## 2. Dataset Size Reduction\n\nDue to the large size of the dataset, we reduce it to a manageable size to achieve good accuracy during training. This step involves selecting a subset of the data for training, validation, and testing.",
"## 3. Map Indices to Tags and Vice Versa\n\nThis section involves mapping indices to NER tag names and vice versa. These mappings are essential for converting between numerical indices and string representations of NER tags.",
"## 4. Mapping Encoded NER Tags to String Representations\n\nHere, we convert the encoded NER tags to their string representations to facilitate better understanding and interpretation of the data.",
"## 5. Loading a Pre-trained Tokenizer\n\nWe load a pre-trained tokenizer, 'DistilBERT', from the Hugging Face Transformers library. The tokenizer is essential for tokenizing the input text data, which is a crucial step in NER tasks.",
"## 6. Align Labels with Tokens\n\nThis section describes the process of aligning labels with tokens in tokenized sequences. It ensures that each label corresponds accurately to its respective token in the tokenized input sequence.",
"## 7. Create Batches of Tokenized Input Data\n\nWe use a 'DataCollatorForTokenClassification' to create batches of tokenized input data for token classification tasks. This step prepares the data for training and evaluation of NER models.",
"## 8. Evaluation Metrics\n\nHere, we install and use the 'seqeval' library to compute evaluation metrics such as precision, recall, F1 score, and accuracy for evaluating the performance of NER models.",
"## 9. Setup Data Pipeline for Checkpointing\n\nWe set up a data pipeline to save all weights and model parameters in a folder for deployment on Hugging Face.",
"## 10. Define Model\n\nWe define the NER model using 'AutoModelForTokenClassification' from the Hugging Face Transformers library. The model is initialized with pre-trained weights and configured for token classification tasks.",
"## 11. Setting up Training Arguments\n\nThis section involves setting up training arguments such as learning rate, number of training epochs, and weight decay for training the NER model.",
"## 12. Training the Model\n\nWe train the NER model using the defined model, training arguments, data collator, tokenizer, and evaluation metrics.",
"## 13. Deployment and Conclusion\n\nThe final section concludes the project, mentioning the training duration, achieved accuracy, and deployment on Hugging Face. It also outlines any further steps or observations."
] |
text-classification | transformers |
# Text-Based Speaker Identification Model

This model identifies the participants in a conversation in text (especially text such as novels, where utterances are recorded inside quotation marks) and extracts the speaker of a target quotation.

Speaker identification proceeds in the following order.

1. Dialogue participant recognition: all dialogue participants across the entire text are recognized via NER.
2. Instance creation per quotation: the window size is adjustable and defaults to 10 lines before and 10 lines after the quotation, so an instance is 21 lines by default (see the sketch below).
3. Speaker inference per quotation: speaker candidates are narrowed down within the instance, and the speaker is inferred.
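
A minimal sketch of step 2 (instance construction), assuming plain line-based windows; the function and field names are illustrative:

```python
def build_instance(lines, quote_idx, window=10):
    """Collect `window` lines of context on each side of the target quotation."""
    start = max(0, quote_idx - window)
    end = min(len(lines), quote_idx + window + 1)
    return {
        "context": lines[start:end],        # up to 21 lines with the default window
        "quote_offset": quote_idx - start,  # quotation position inside the instance
    }
```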
`GitHub Repository`: https://github.com/Novel-Transformation-for-You/service.git

### Team Members
|||||||
|:-:|:-:|:-:|:-:|:-:|:-:|
|<a href="https://github.com/kangminhyeok02"><img src="https://avatars.githubusercontent.com/u/110012174?v=4" width="100" height="100"></a>|<a href="https://github.com/duwjd"><img src="https://avatars.githubusercontent.com/u/31474225?v=4" width="100" height="100"></a>|<a href="https://github.com/yuneun92"><img src="https://avatars.githubusercontent.com/u/101092482?v=4" width="100" height="100"></a>|<a href="https://github.com/Hawon-Lee"><img src="https://avatars.githubusercontent.com/u/136240081?v=4" width="100" height="100"></a>|<a href="https://github.com/07LEE"><img src="https://avatars.githubusercontent.com/u/95900411?v=4" width="100" height="100"></a>|<a href="https://github.com/SeungHoJUN"><img src="https://avatars.githubusercontent.com/u/89953442?v=4" width="100" height="100"></a>|
### System Architecture

![image/png](https://cdn-uploads.huggingface.co/production/uploads/658164217ab34d3ae11502de/HxgsQtLS_hjQUWDj8z-8x.png)

### Model Info
1. Base model: koRoBERTa
2. NER model: Korea Maritime & Ocean University NER model (MIT License) https://github.com/kmounlp/NER
```
max_seq_length = 512  # maximum input length, in tokens, per instance fed to the encoder
``` | {"language": ["ko"], "license": "mit", "library_name": "transformers", "pipeline_tag": "text-classification"} | yuneun92/koCSN_SAPR | null | [
"transformers",
"text-classification",
"ko",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:21:22+00:00 | [] | [
"ko"
] | TAGS
#transformers #text-classification #ko #license-mit #endpoints_compatible #region-us
| Text-Based Speaker Identification Model
================
This model identifies the participants in a conversation in text (especially text such as novels, where utterances are recorded inside quotation marks) and extracts the speaker of a target quotation.
Speaker identification proceeds in the following order.
1. Dialogue participant recognition: all dialogue participants across the entire text are recognized via NER.
2. Instance creation per quotation: the window size is adjustable and defaults to 10 lines before and 10 lines after the quotation, so an instance is 21 lines by default.
3. Speaker inference per quotation: speaker candidates are narrowed down within the instance, and the speaker is inferred.
'GitHub Repository': URL
!image/png
### Team Members
### System Architecture
!image/png
### Model Info
1. Base model: koRoBERTa
2. NER model: Korea Maritime & Ocean University NER model (MIT License) URL
| [
"### 팀원",
"### 시스템 구성도\n\n\n!image/png",
"### 모델 정보\n\n\n1. Base model: koRoBERTa\n2. NER model: 한국해양대학교 NER 모델 (MIT License) URL"
] | [
"TAGS\n#transformers #text-classification #ko #license-mit #endpoints_compatible #region-us \n",
"### 팀원",
"### 시스템 구성도\n\n\n!image/png",
"### 모델 정보\n\n\n1. Base model: koRoBERTa\n2. NER model: 한국해양대학교 NER 모델 (MIT License) URL"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/KaeriJenti/kaori-70b-v1
**No more quants are incoming, as llama.cpp crashes when generating them.**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/kaori-70b-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
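
As a rough sketch, a split download just needs a byte-for-byte concatenation; the file names below come from the Q6_K row of the table that follows:

```python
# Stitch a two-part GGUF download back into a single file.
parts = [
    "kaori-70b-v1.i1-Q6_K.gguf.part1of2",
    "kaori-70b-v1.i1-Q6_K.gguf.part2of2",
]
with open("kaori-70b-v1.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```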
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/kaori-70b-v1-i1-GGUF/resolve/main/kaori-70b-v1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "KaeriJenti/kaori-70b-v1", "no_imatrix": "GGML_ASSERT: llama.cpp/ggml-quants.c:11239: grid_index >= 0", "quantized_by": "mradermacher"} | mradermacher/kaori-70b-v1-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:KaeriJenti/kaori-70b-v1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:21:26+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-KaeriJenti/kaori-70b-v1 #license-llama2 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
No more quants are incoming, as URL crashes when generating them.
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-KaeriJenti/kaori-70b-v1 #license-llama2 #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NHS-distilbert-binary-random
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5121
- Accuracy: 0.8019
- Precision: 0.7972
- Recall: 0.8065
- F1: 0.7988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1385 | 1.0 | 397 | 0.4277 | 0.8069 | 0.8004 | 0.7989 | 0.7996 |
| 0.0481 | 2.0 | 794 | 0.4580 | 0.7931 | 0.7894 | 0.7990 | 0.7903 |
| 2.0213 | 3.0 | 1191 | 0.5121 | 0.8019 | 0.7972 | 0.8065 | 0.7988 |
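
The four metric columns above can be produced by a `compute_metrics` hook along these lines (a sketch; the macro averaging is an assumption):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro")           # assumed averaging mode
    return {"accuracy": accuracy_score(labels, preds),
            "precision": precision, "recall": recall, "f1": f1}
```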
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "NHS-distilbert-binary-random", "results": []}]} | intermezzo672/NHS-distilbert-binary-random | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T05:23:54+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| NHS-distilbert-binary-random
============================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5121
* Accuracy: 0.8019
* Precision: 0.7972
* Recall: 0.8065
* F1: 0.7988
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 6
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "tmp_trainer", "results": []}]} | ConnorLin/tmp_trainer | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:24:06+00:00 | [] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# tmp_trainer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| [
"# tmp_trainer\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.0.0+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# tmp_trainer\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.0.0+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0"
] |
text-to-video | transformers |
# video_generation_model
video_generation_model is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [TheBloke/openchat_3.5-GPTQ](https://huggingface.co/TheBloke/openchat_3.5-GPTQ)
* [dalle-mini/dalle-mini](https://huggingface.co/dalle-mini/dalle-mini)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: TheBloke/openchat_3.5-GPTQ
layer_range: [0, 32]
- model: dalle-mini/dalle-mini
layer_range: [0, 32]
merge_method: slerp
base_model: TheBloke/openchat_3.5-GPTQ
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nagayama0706/video_generation_model"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "TheBloke/openchat_3.5-GPTQ", "dalle-mini/dalle-mini"], "base_model": ["TheBloke/openchat_3.5-GPTQ", "dalle-mini/dalle-mini"], "pipeline_tag": "text-to-video"} | nagayama0706/video_generation_model | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"TheBloke/openchat_3.5-GPTQ",
"dalle-mini/dalle-mini",
"text-to-video",
"base_model:TheBloke/openchat_3.5-GPTQ",
"base_model:dalle-mini/dalle-mini",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:24:07+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #TheBloke/openchat_3.5-GPTQ #dalle-mini/dalle-mini #text-to-video #base_model-TheBloke/openchat_3.5-GPTQ #base_model-dalle-mini/dalle-mini #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# video_generation_model
video_generation_model is a merge of the following models using LazyMergekit:
* TheBloke/openchat_3.5-GPTQ
* dalle-mini/dalle-mini
## Configuration
## Usage
| [
"# video_generation_model\n\nvideo_generation_model is a merge of the following models using LazyMergekit:\n* TheBloke/openchat_3.5-GPTQ\n* dalle-mini/dalle-mini",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #TheBloke/openchat_3.5-GPTQ #dalle-mini/dalle-mini #text-to-video #base_model-TheBloke/openchat_3.5-GPTQ #base_model-dalle-mini/dalle-mini #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# video_generation_model\n\nvideo_generation_model is a merge of the following models using LazyMergekit:\n* TheBloke/openchat_3.5-GPTQ\n* dalle-mini/dalle-mini",
"## Configuration",
"## Usage"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - BlackSadadou/SD-Black-IMGgen-Model
This pipeline was finetuned from **CompVis/stable-diffusion-v1-4** on the **BlackSadadou/STBlackIMGgen** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ["Un Chat dans le style Black's style"]:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("BlackSadadou/SD-Black-IMGgen-Model", torch_dtype=torch.float16)
prompt = "Un Chat dans le style Black's style"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 2
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 4
* Image resolution: 512
* Mixed-precision: fp16
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training"], "inference": true, "base_model": "CompVis/stable-diffusion-v1-4"} | BlackSadadou/SD-Black-IMGgen-Model | null | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-16T05:25:45+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #diffusers-training #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Text-to-image finetuning - BlackSadadou/SD-Black-IMGgen-Model
This pipeline was finetuned from CompVis/stable-diffusion-v1-4 on the BlackSadadou/STBlackIMGgen dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ["Un Chat dans le style Black's style"]:
!val_imgs_grid
## Pipeline usage
You can use the pipeline like so:
## Training info
These are the key hyperparameters used during training:
* Epochs: 2
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 4
* Image resolution: 512
* Mixed-precision: fp16
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# Text-to-image finetuning - BlackSadadou/SD-Black-IMGgen-Model\n\nThis pipeline was finetuned from CompVis/stable-diffusion-v1-4 on the BlackSadadou/STBlackIMGgen dataset. Below are some example images generated with the finetuned pipeline using the following prompts: [\"Un Chat dans le style Black's style\"]: \n\n!val_imgs_grid",
"## Pipeline usage\n\nYou can use the pipeline like so:",
"## Training info\n\nThese are the key hyperparameters used during training:\n\n* Epochs: 2\n* Learning rate: 1e-05\n* Batch size: 1\n* Gradient accumulation steps: 4\n* Image resolution: 512\n* Mixed-precision: fp16",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #diffusers-training #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Text-to-image finetuning - BlackSadadou/SD-Black-IMGgen-Model\n\nThis pipeline was finetuned from CompVis/stable-diffusion-v1-4 on the BlackSadadou/STBlackIMGgen dataset. Below are some example images generated with the finetuned pipeline using the following prompts: [\"Un Chat dans le style Black's style\"]: \n\n!val_imgs_grid",
"## Pipeline usage\n\nYou can use the pipeline like so:",
"## Training info\n\nThese are the key hyperparameters used during training:\n\n* Epochs: 2\n* Learning rate: 1e-05\n* Batch size: 1\n* Gradient accumulation steps: 4\n* Image resolution: 512\n* Mixed-precision: fp16",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | swj0419/bbc_retrain_STEP0000020 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:26:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5940
- Rewards/chosen: 0.0155
- Rewards/rejected: -0.3247
- Rewards/accuracies: 0.875
- Rewards/margins: 0.3402
- Logps/rejected: -329.5930
- Logps/chosen: -281.9707
- Logits/rejected: -2.2879
- Logits/chosen: -2.2211
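
As a quick sanity check on these numbers, the reported margin is simply the chosen reward minus the rejected reward:

```python
# Rewards/margins is the gap between the chosen and rejected implicit rewards.
rewards_chosen, rewards_rejected = 0.0155, -0.3247
assert round(rewards_chosen - rewards_rejected, 4) == 0.3402  # Rewards/margins
```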
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
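
For orientation, below is a minimal sketch of how these hyperparameters could be wired into TRL's `DPOTrainer` on top of the GPTQ base model. The toy preference dataset and the LoRA config are illustrative assumptions, not the actual training setup:

```python
# Minimal DPO sketch with TRL + PEFT -- the dataset and LoRA settings are
# placeholders; only the hyperparameters mirror the card above.
# (Loading a GPTQ checkpoint additionally requires optimum/auto-gptq.)
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# DPO trains on preference pairs: one prompt, a chosen and a rejected reply.
train_dataset = Dataset.from_dict({
    "prompt":   ["What is direct preference optimization?"],
    "chosen":   ["DPO fine-tunes a policy directly on preference pairs..."],
    "rejected": ["No idea."],
})

args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    learning_rate=2e-4,             # learning_rate: 0.0002
    lr_scheduler_type="linear",
    warmup_steps=2,                 # lr_scheduler_warmup_steps: 2
    max_steps=50,                   # training_steps: 50
    fp16=True,                      # mixed_precision_training: Native AMP
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT adapter, TRL uses the frozen base as reference
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # assumed adapter config
)
trainer.train()
```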
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6766 | 0.01 | 10 | 0.7047 | -0.0328 | -0.0733 | 0.5 | 0.0405 | -327.0790 | -282.4540 | -2.2913 | -2.2181 |
| 0.7059 | 0.01 | 20 | 0.6603 | -0.0641 | -0.2367 | 0.8125 | 0.1726 | -328.7130 | -282.7669 | -2.2896 | -2.2224 |
| 0.6913 | 0.01 | 30 | 0.6036 | 0.0238 | -0.2729 | 0.875 | 0.2968 | -329.0755 | -281.8873 | -2.2889 | -2.2248 |
| 0.7003 | 0.02 | 40 | 0.5921 | 0.0259 | -0.3104 | 0.875 | 0.3364 | -329.4504 | -281.8663 | -2.2881 | -2.2227 |
| 0.6585 | 0.03 | 50 | 0.5940 | 0.0155 | -0.3247 | 0.875 | 0.3402 | -329.5930 | -281.9707 | -2.2879 | -2.2211 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "TheBloke/OpenHermes-2-Mistral-7B-GPTQ", "model-index": [{"name": "openhermes-mistral-dpo-gptq", "results": []}]} | Gokul29/openhermes-mistral-dpo-gptq | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T05:26:59+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-TheBloke/OpenHermes-2-Mistral-7B-GPTQ #license-apache-2.0 #region-us
| openhermes-mistral-dpo-gptq
===========================
This model is a fine-tuned version of TheBloke/OpenHermes-2-Mistral-7B-GPTQ on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5940
* Rewards/chosen: 0.0155
* Rewards/rejected: -0.3247
* Rewards/accuracies: 0.875
* Rewards/margins: 0.3402
* Logps/rejected: -329.5930
* Logps/chosen: -281.9707
* Logits/rejected: -2.2879
* Logits/chosen: -2.2211
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2
* training\_steps: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.38.2
* Pytorch 2.0.1+cu117
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* training\\_steps: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.0.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-TheBloke/OpenHermes-2-Mistral-7B-GPTQ #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* training\\_steps: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.0.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_8192_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8080
- F1 Score: 0.8018
- Accuracy: 0.8018
## Model description
More information needed
## Intended uses & limitations
More information needed
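
In the absence of documented usage, here is a purely hypothetical sketch of loading this PEFT adapter for inference. The sequence-classification head, label set, and DNA-string input are assumptions inferred from the task name and metrics; none of this is stated in the card:

```python
# Hypothetical inference sketch -- model class, labels, and input format
# are assumptions; the card does not document usage.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_30M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id)  # may need trust_remote_code=True
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned LoRA weights
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index
```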
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3068 | 100.0 | 200 | 0.9121 | 0.7926 | 0.7927 |
| 0.0498 | 200.0 | 400 | 1.2547 | 0.7713 | 0.7713 |
| 0.0238 | 300.0 | 600 | 1.3193 | 0.7744 | 0.7744 |
| 0.0136 | 400.0 | 800 | 1.4932 | 0.7801 | 0.7805 |
| 0.0096 | 500.0 | 1000 | 1.6597 | 0.7735 | 0.7744 |
| 0.0073 | 600.0 | 1200 | 1.6320 | 0.7590 | 0.7591 |
| 0.0056 | 700.0 | 1400 | 1.7398 | 0.7649 | 0.7652 |
| 0.0042 | 800.0 | 1600 | 1.9684 | 0.7667 | 0.7683 |
| 0.0037 | 900.0 | 1800 | 1.9289 | 0.7677 | 0.7683 |
| 0.0029 | 1000.0 | 2000 | 2.0433 | 0.7799 | 0.7805 |
| 0.0027 | 1100.0 | 2200 | 1.9544 | 0.7769 | 0.7774 |
| 0.0023 | 1200.0 | 2400 | 1.9650 | 0.7925 | 0.7927 |
| 0.0021 | 1300.0 | 2600 | 1.9799 | 0.7835 | 0.7835 |
| 0.0014 | 1400.0 | 2800 | 2.1683 | 0.7799 | 0.7805 |
| 0.0016 | 1500.0 | 3000 | 2.1625 | 0.7760 | 0.7774 |
| 0.0017 | 1600.0 | 3200 | 2.1244 | 0.7796 | 0.7805 |
| 0.0016 | 1700.0 | 3400 | 2.1270 | 0.7699 | 0.7713 |
| 0.0011 | 1800.0 | 3600 | 2.2024 | 0.7865 | 0.7866 |
| 0.0015 | 1900.0 | 3800 | 2.1920 | 0.7831 | 0.7835 |
| 0.0011 | 2000.0 | 4000 | 2.2344 | 0.7864 | 0.7866 |
| 0.0012 | 2100.0 | 4200 | 2.2660 | 0.7768 | 0.7774 |
| 0.0009 | 2200.0 | 4400 | 2.2521 | 0.7957 | 0.7957 |
| 0.001 | 2300.0 | 4600 | 2.0965 | 0.7710 | 0.7713 |
| 0.0007 | 2400.0 | 4800 | 2.2597 | 0.7647 | 0.7652 |
| 0.0008 | 2500.0 | 5000 | 2.1783 | 0.7863 | 0.7866 |
| 0.0008 | 2600.0 | 5200 | 2.1740 | 0.7764 | 0.7774 |
| 0.0007 | 2700.0 | 5400 | 2.2071 | 0.7731 | 0.7744 |
| 0.0008 | 2800.0 | 5600 | 2.1864 | 0.7735 | 0.7744 |
| 0.0005 | 2900.0 | 5800 | 2.3478 | 0.7801 | 0.7805 |
| 0.0006 | 3000.0 | 6000 | 2.3613 | 0.7769 | 0.7774 |
| 0.0006 | 3100.0 | 6200 | 2.4406 | 0.7640 | 0.7652 |
| 0.0006 | 3200.0 | 6400 | 2.3294 | 0.7804 | 0.7805 |
| 0.0004 | 3300.0 | 6600 | 2.4409 | 0.7709 | 0.7713 |
| 0.0005 | 3400.0 | 6800 | 2.4549 | 0.7673 | 0.7683 |
| 0.0004 | 3500.0 | 7000 | 2.4397 | 0.7796 | 0.7805 |
| 0.0004 | 3600.0 | 7200 | 2.3181 | 0.7770 | 0.7774 |
| 0.0004 | 3700.0 | 7400 | 2.3868 | 0.7835 | 0.7835 |
| 0.0003 | 3800.0 | 7600 | 2.4762 | 0.7678 | 0.7683 |
| 0.0004 | 3900.0 | 7800 | 2.4945 | 0.7796 | 0.7805 |
| 0.0003 | 4000.0 | 8000 | 2.4778 | 0.7771 | 0.7774 |
| 0.0003 | 4100.0 | 8200 | 2.5574 | 0.7799 | 0.7805 |
| 0.0003 | 4200.0 | 8400 | 2.6342 | 0.7794 | 0.7805 |
| 0.0002 | 4300.0 | 8600 | 2.6390 | 0.7803 | 0.7805 |
| 0.0003 | 4400.0 | 8800 | 2.5978 | 0.7832 | 0.7835 |
| 0.0002 | 4500.0 | 9000 | 2.6270 | 0.7798 | 0.7805 |
| 0.0001 | 4600.0 | 9200 | 2.6117 | 0.7860 | 0.7866 |
| 0.0002 | 4700.0 | 9400 | 2.6096 | 0.7771 | 0.7774 |
| 0.0002 | 4800.0 | 9600 | 2.6161 | 0.7771 | 0.7774 |
| 0.0002 | 4900.0 | 9800 | 2.5870 | 0.7801 | 0.7805 |
| 0.0002 | 5000.0 | 10000 | 2.5925 | 0.7830 | 0.7835 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_2-seqsight_8192_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-16T05:27:15+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_mouse\_2-seqsight\_8192\_512\_30M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8080
* F1 Score: 0.8018
* Accuracy: 0.8018
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# JSL-MedMNX-7B-SFT
[<img src="https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf">](http://www.johnsnowlabs.com)
JSL-MedMNX-7B-SFT is a 7-billion-parameter model developed by [John Snow Labs](https://www.johnsnowlabs.com/).

This model is SFT-finetuned on an 11k-sample Alpaca-format medical dataset over the base model [JSL-MedMNX-7B](https://huggingface.co/johnsnowlabs/JSL-MedMNX-7B). Check out its performance on the [Open Medical LLM Leaderboard](https://huggingface.co/spaces/openlifescienceai/open_medical_llm_leaderboard).
This model is available under a [CC-BY-NC-ND](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en) license and must also conform to this [Acceptable Use Policy](https://huggingface.co/johnsnowlabs). If you need to license this model for commercial use, please contact us at [email protected].
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

# Load the chat-tuned model and its tokenizer from the Hub.
model = "johnsnowlabs/JSL-MedMNX-7B-SFT"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Render the chat messages into the model's expected prompt format.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,  # half precision to fit the 7B model on a single GPU
    device_map="auto",
)

# Sample a completion; adjust temperature/top_k/top_p to taste.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🏆 Evaluation
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc_norm|0.5209|± |0.0068|
| | |none | 0|acc |0.5675|± |0.0058|
| - medmcqa |Yaml |none | 0|acc |0.5152|± |0.0077|
| | |none | 0|acc_norm|0.5152|± |0.0077|
| - medqa_4options |Yaml |none | 0|acc |0.5397|± |0.0140|
| | |none | 0|acc_norm|0.5397|± |0.0140|
| - anatomy (mmlu) | 0|none | 0|acc |0.6593|± |0.0409|
| - clinical_knowledge (mmlu) | 0|none | 0|acc |0.7245|± |0.0275|
| - college_biology (mmlu) | 0|none | 0|acc |0.7431|± |0.0365|
| - college_medicine (mmlu) | 0|none | 0|acc |0.6532|± |0.0363|
| - medical_genetics (mmlu) | 0|none | 0|acc |0.7300|± |0.0446|
| - professional_medicine (mmlu)| 0|none | 0|acc |0.7206|± |0.0273|
| - pubmedqa | 1|none | 0|acc |0.7720|± |0.0188|
|Groups|Version|Filter|n-shot| Metric |Value | |Stderr|
|------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc_norm|0.5209|± |0.0068|
| | |none | 0|acc |0.5675|± |0.0058| | {"language": ["en"], "license": "cc-by-nc-nd-4.0", "library_name": "transformers", "tags": ["reward model", "RLHF", "medical"]} | johnsnowlabs/JSL-MedMNX-7B-SFT | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"reward model",
"RLHF",
"medical",
"conversational",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:27:20+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #reward model #RLHF #medical #conversational #en #license-cc-by-nc-nd-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| JSL-MedMNX-7B-SFT
=================
<img src="URL
JSL-MedMNX-7B-SFT is a 7-billion-parameter model developed by John Snow Labs.

This model is SFT-finetuned on an 11k-sample Alpaca-format medical dataset over the base model JSL-MedMNX-7B. Check out its performance on the Open Medical LLM Leaderboard.
This model is available under a CC-BY-NC-ND license and must also conform to this Acceptable Use Policy. If you need to license this model for commercial use, please contact us at info@URL.
Usage
-----
Evaluation
----------
| [] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #reward model #RLHF #medical #conversational #en #license-cc-by-nc-nd-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meditron-7b-wo-kqa_golden-sft
This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
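
Note that the reported totals are consistent with the per-device settings:

```python
# The reported totals follow directly from the per-device settings:
per_device_train_bs, num_devices, grad_accum_steps = 4, 4, 4
assert per_device_train_bs * num_devices * grad_accum_steps == 64  # total_train_batch_size
per_device_eval_bs = 4
assert per_device_eval_bs * num_devices == 16                      # total_eval_batch_size
```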
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1778 | 0.89 | 6 | 1.0390 |
| 1.0295 | 1.93 | 13 | 0.8659 |
| 0.903 | 2.67 | 18 | 0.8405 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "llama2", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b-wo-kqa_golden-sft", "results": []}]} | Minbyul/meditron-7b-wo-kqa_golden-sft | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:epfl-llm/meditron-7b",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:27:40+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-epfl-llm/meditron-7b #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| meditron-7b-wo-kqa\_golden-sft
==============================
This model is a fine-tuned version of epfl-llm/meditron-7b on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8405
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-epfl-llm/meditron-7b #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | swj0419/bbc_STEP0000080 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T05:28:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |